Table of Contents
date title user score
2021-10-09 19:15:34 Graph of Keybase commits pre and post Zoom acquisition 0des 177
2021-10-05 07:30:30 Startup Ideas luu 222
2021-10-05 13:15:42 It is easier to educate a Do-er than to motivate the educated tosh 446
2021-09-30 09:59:40 Are software engineering “best practices” just developer preferences? floverfelt 316
2021-09-30 10:50:30 Major Quantum Computing Strategy Suffers Serious Setbacks elsewhen 90
2021-09-29 09:27:45 Attempts to scientifically “rationalize” policy may be damaging democracy anarbadalov 235
2021-09-29 04:18:46 Response to 'Call for Review: Decentralized Identifiers (DIDs) v1.0' lorn3 86
2021-09-29 18:01:08 Apple didn't revolutionize power supplies; new transistors did (2012) Rondom 208
2021-09-27 18:02:51 What does my engineering manager do all day? mooreds 187
2021-09-23 12:29:15 Using two keyboards at once for pain relief ruffrey 349
2021-09-22 10:52:56 Waydroid – Run Android containers on Ubuntu pabs3 684
2021-09-16 11:36:55 Biologists Rethink the Logic Behind Cells’ Molecular Signals theafh 104
2021-09-16 23:47:13 The Shunting-yard algorithm converts infix notation to RPN westurner 2
2021-09-13 20:13:29 How should logarithms be taught? raviparikh 34
2021-09-15 12:12:35 Automatic cipher suite ordering in Go’s crypto/tls FiloSottile 122
2021-09-14 04:50:14 Scikit-Learn Version 1.0 m3at 260
2021-09-14 09:11:22 Signed Exchanges on Google Search oedmarap 5
2021-09-11 17:43:17 AlphaGo documentary (2020) [video] rdli 248
2021-09-11 11:16:26 Interpretable Model-Based Hierarchical RL Using Inductive Logic Programming YeGoblynQueenne 66
2021-09-13 07:41:02 Ship / Show / Ask: A modern branching strategy NicoJuicy 157
2021-09-13 13:38:02 Show HN: TweeView – A Tree Visualisation of Twitter Conversations edent 55
2021-09-11 15:07:03 Wireless Charging Power Side-Channel Attacks tosh 68
2021-09-11 15:07:11 How We Proved the Eth2 Deposit Contract Is Free of Runtime Errors michaelsbradley 179
2021-09-12 08:36:03 Physics-Based Deep Learning Book Anon84 195
2021-09-10 03:38:41 Ask HN: Books that teach you programming languages via systems projects? Foe 204
2021-09-04 16:40:30 How you can track your personal finances using Python siddhant 140
2021-09-09 17:22:35 CISA Lays Out Security Rules for Zero Trust Clouds CrankyBear 6
2021-09-09 07:53:55 Show HN: Heroku Alternative for Python/Django apps appliku 183
2021-09-09 13:33:54 SPDX Becomes Internationally Recognized Standard for Software Bill of Materials warp 10
2021-09-07 03:35:39 Show HN: Arxiv.org on IPFS hugoroussel 238
2021-09-04 13:46:05 New Texas Abortion Law Likely to Unleash a Torrent of Lawsuits Against Education gamontserrat 118
2021-09-02 20:25:43 DARPA grant to work on sensing and stimulating the brain noninvasively [video] grawprog 83
2021-09-02 19:55:58 New Ways to Be Told That Your Python Code Is Bad nickdrozd 102
2021-09-03 05:14:58 Web-based editor pjmlp 564
2021-09-03 06:48:06 GitHub Copilot Generated Insecure Code in 40% of Circumstances During Experiment elsombrero 261
2021-09-01 10:00:44 AAS Journals Will Switch to Open Access sohkamyung 215
2021-08-30 23:46:28 White House Launches US Digital Corps elsewhen 160
2021-08-25 08:13:52 Launch HN: Litnerd (YC S21) – Teaching kids to read with the help of live actors Anisa_Mirza 127
2021-08-27 12:20:28 Nimforum: Lightweight alternative to Discourse written in Nim ducktective 172
2021-08-21 12:21:58 An Opinionated Guide to Xargs todsacerdoti 402
2021-08-20 21:41:10 Enhanced Support for Citations on GitHub chenzhekl 80
2021-08-18 17:51:46 Canada calls screen scraping ‘unsecure,’ sets Open Banking target for 2023 exotree 349
2021-08-13 09:03:22 Interactive Linear Algebra (2019) natemcintosh 365
2021-08-12 16:12:15 Git password authentication is shutting down judge2020 440
2021-08-12 10:33:42 A future for SQL on the web rasmusfabbe 925
2021-08-01 11:34:17 Show HN: Python Source Code Refactoring Toolkit via AST treesciencebot 110
2021-08-03 09:27:50 Emacs' org-mode gets citation support NeutralForest 234
2021-08-03 11:55:43 NSA Kubernetes Hardening Guidance [pdf] kennethko 635
2021-07-31 02:56:35 Hosting SQLite Databases on GitHub Pages isnotchicago 567
2021-07-22 23:42:24 Ask HN: Any good resources on how to be a great technical advisor to startups? _009 21
2021-07-11 21:23:27 Teaching other teachers how to teach CS better robfig 156
2021-07-06 12:15:22 Ask HN: Best online speech / public speaking course? i_am_not_elon 33
2021-06-30 21:39:31 Google sunsets the APK format for new Android apps kevin_thibedeau 142
2021-06-22 12:30:43 A from-scratch tour of Bitcoin in Python yigitdemirag 1187
2021-06-13 17:33:23 An Omega-3 that’s poison for cancer tumors elorant 255
2021-06-08 19:09:39 Discover and Prevent Linux Kernel Zero-Day Exploit Using Formal Verification vzaliva 3
2021-06-04 13:28:44 Anatomy of a Linux DNS Lookup belter 168
2021-05-29 02:59:45 JupyterLite – WASM-powered Jupyter running in the browser ahurmazda 205
2021-05-26 16:05:11 Accenture, GitHub, Microsoft and ThoughtWorks Launch the GSF scottcha 4
2021-05-27 14:21:55 DRAM Alternative Developed: 4X Higher Density at Higher Speed and Lower Power billyharris 14
2021-05-27 11:07:44 Rocky Linux releases its first release candidate sparcpile 147
2021-05-26 06:09:46 USB-C is about to go from 100W to 240W, enough to power beefier laptops Tomte 427
2021-05-25 12:02:06 Half-Double: New hammering technique for DRAM Rowhammer bug fqazi 189
2021-05-20 15:20:29 Setting up a Raspberry Pi with 2 Network Interfaces as a simple router geerlingguy 126
2021-05-19 03:20:31 What to do about GPU packages on PyPI? polm23 123
2021-05-18 17:17:39 Markdown Notes VS Code extension: Navigate notes with [[wiki-links]] julienreszka 2
2021-05-11 14:07:44 Ask HN: Choosing a language to learn for the heck of it bsg75 13
2021-05-10 11:53:54 Show HN: Django SQL Dashboard simonw 202
2021-05-06 13:33:00 Interactive IPA Chart Jeud 243
2021-05-06 16:15:29 Google Dataset Search abraxaz 386
2021-05-04 20:09:49 Ask HN: Cap Table Service Recommendations Ankaios 1
2021-05-02 12:43:15 Hosting SQLite databases on GitHub Pages or any static file hoster phiresky 1808
2021-04-23 13:11:53 Wasm3 compiles itself (using LLVM/Clang compiled to WASM) theBashShell 178
2021-04-24 01:18:52 Remote code execution in Homebrew by compromising the official Cask repository spenvo 387
2021-04-22 12:51:22 Semgrep: Semantic grep for code ievans 415
2021-04-10 09:05:03 Ask HN: What to use instead of Bash / Sh for scripting? lordgroff 52
2021-04-09 13:11:27 Estonian Electronic Identity Card and Its Security Challenges [pdf] IndrekR 72
2021-04-08 20:49:08 Systemd makes life miserable, again, this time by breaking DNS bcrl 5
2021-04-08 21:35:57 Ask HN: How bad is proof-of-work blockchain energy consumption? furrowedbrow 2
2021-03-30 17:42:22 What does a PGP signature on a Git commit prove? JNRowe 147
2021-03-30 06:40:47 Breakthrough for ‘massless’ energy storage reimertz 233
2021-03-25 10:08:52 OpenSSL Security Advisory arkadiyt 327
2021-03-26 14:15:06 How much total throughput can your wi-fi router really provide? giuliomagnifico 84
2021-03-23 17:28:53 The Most Important Scarce Resource Is Legitimacy ve55 119
2021-03-19 11:31:14 A few notes on message passing srijan4 151
2021-03-11 13:41:42 Duolingo's language notes all on one page rococode 265
2021-03-11 12:19:20 Ask HN: The easiest programming language for teaching programming to young kids? simplerman 25
2021-03-07 10:09:22 Raspberry Pi for Kill Mosquitoes by Laser ColinWright 342
2021-03-07 10:16:33 Donate Unrestricted razin 288
2021-03-02 09:55:02 Bitcoin Is Time taylorwc 442
2021-02-28 06:34:44 Foundational Distributed Systems Papers mastabadtomm 253
2021-02-28 21:46:20 Low-Cost Multi-touch Whiteboard using the Wiimote (2007) [video] jstrieb 49
2021-02-27 21:56:01 How to Efficiently Choose the Right Database for Your Applications gesaint 80
2021-02-21 17:26:02 A Data Pipeline Is a Materialized View nchammas 144
2021-02-18 06:17:14 There’s no such thing as “a startup within a big company” isolli 635
2021-02-18 03:21:39 Ask HN: Keyrings: per-package/repo; commit, merge, and release keyrings? westurner 1
2021-02-13 01:42:16 Threat Actors Now Target Docker via Container Escape Features pizza 134
2021-02-11 23:09:15 Ask HN: What security is in place for bank-to-bank EFT? andrewon 1
2021-02-11 09:06:18 Podman: A Daemonless Container Engine lobo_tuerto 320
2021-02-10 07:54:05 Cambridge Bitcoin Electricity Consumption Index apples_oranges 979
2021-02-10 13:41:55 Bitcoin's fundamental value is negative given its environmental impact martinlaz 134
2021-02-05 13:41:13 Ask HN: What are some books where the reader learns by building projects? Shosty123 53
2021-02-05 09:05:57 Is it wrong to demand features in open-source projects? theabbie 8
2021-02-02 09:43:58 CompilerGym: A toolkit for reinforcement learning for compiler optimization azhenley 139
2021-01-24 07:17:14 Turning desalination waste into a useful resource thereare5lights 42
2021-01-26 17:41:26 Evcxr: A Rust REPL and Jupyter Kernel batterylow 170
2021-01-24 16:51:25 Ask HN: What is the cost to launch a SaaS business MVP mikesabbagh 16
2021-01-23 17:03:11 Cryptocurrency crime is way ahead of regulators and law enforcement dgellow 114
2021-01-22 14:39:19 Ask HN: Why aren't micropayments a thing? wppick 106
2021-01-21 18:34:43 Elon Musk announces $100M carbon capture prize tito 11
2021-01-11 08:20:39 Tim Berners-Lee wants to put people in control of their personal data IvanSologub 238
2021-01-11 07:08:49 Governments spurred the rise of solar power jakozaur 133
2021-01-05 07:50:03 Termux no longer updated on Google Play martinlaz 362
2021-01-01 16:57:02 Ask HN: What should go in an Excel-to-Python equivalent of a couch-to-5k? etothepii 9
2020-12-28 08:06:01 Scientists turn CO2 into jet fuel vanburen 61
2020-12-27 14:16:01 Show HN: Stork: A customizable, WASM-powered full-text search plugin for the web jil 137
2020-12-27 14:14:42 Upptime – GitHub-powered open-source uptime monitor and status page fahrradflucht 301
2020-12-26 11:31:47 Show HN: Simple-graph – a graph database in SQLite dpapathanasiou 236
2020-12-24 10:54:18 In CPython, types implemented in C are part of the type tree todsacerdoti 108
2020-12-16 08:15:03 Experiments on a $50 DIY air purifier that takes 30s to assemble dyno-might 292
2020-12-13 06:07:56 Goodreads plans to retire API access, disables existing API keys buttscicles 869
2020-12-11 16:33:14 Turing Tumble Simulator tobias2014 2
2020-11-30 07:53:34 Python Pip 20.3 Released with new resolver groodt 224
2020-11-23 14:39:50 Convolution Is Fancy Multiplication ubac 397
2020-11-18 10:09:55 How to better ventilate your home arunbahl 101
2020-11-06 13:01:34 Quantum-computing pioneer Peter Shor warns of complacency over Internet security headalgorithm 2
2020-11-05 01:11:47 CERN Online introductory lectures on quantum computing from 6 November limist 277
2020-11-03 19:31:07 A Manim Code Template HaoZeke 2
2020-10-21 20:28:21 Startup Financial Modeling: What is a Financial Model? (2016) aaronbski 229
2020-10-16 18:23:29 At what grade level do presidential candidates debate? the_afonseca 51
2020-10-11 14:30:27 ElectricityMap – Live CO₂ emissions of electricity production and consumption jka 221
2020-10-09 02:53:03 Bash Error Handling sohkamyung 287
2020-10-09 18:37:09 A Customer Acquisition Playbook for Consumer Startups jcs87 129
2020-10-06 02:34:07 Gathering all open and sustainable technology projects protontypes 3
2020-10-05 11:50:57 Jupyter Notebooks Gallery jeffnotebook 101
2020-10-03 16:33:30 NestedText, a nice alternative to JSON, YAML, TOML nestedtext 302
2020-10-04 12:21:50 Algorithm discovers how six molecules could evolve into life’s building blocks gmays 390
2020-10-02 14:16:05 Physicists build circuit that generates clean, limitless power from graphene westurner 42
2020-09-29 17:37:53 Mozilla shuts project Iodide: Datascience documents in browsers ritwiksaikia 46
2020-09-27 07:18:50 Ask HN: What are good life skills for people to learn? smarri 254
2020-09-23 22:04:25 Four Keys Project metrics for DevOps team performance westurner 3
2020-09-19 09:13:45 Ask HN: Resources to encourage teen on becoming computer engineer? tomrod 111
2020-09-18 14:10:34 CadQuery: A Python parametric CAD scripting framework based on OCCT OJFord 134
2020-09-17 01:31:25 Array Programming with NumPy hardmaru 289
2020-09-17 16:38:37 Do you like the browser bookmark manager? andyware 6
2020-09-17 12:58:46 NIST Samate – Source Code Security Analyzers animationwill 71
2020-09-17 04:19:49 A Handwritten Math Parser in 100 lines of Python gnebehay 64
2020-09-15 06:25:51 PEP – An open source PDF editor for Mac threcius 191
2020-09-12 10:49:38 The Unix timestamp will begin with 16 this Sunday dezmou 452
2020-09-11 07:36:54 Redox: Unix-Like Operating System in Rust bpierre 242
2020-09-11 09:31:37 Ask HN: How are online communities established? jayshua 127
2020-09-10 20:19:41 Python Documentation Using Sphinx keyboardman 1
2020-09-10 07:18:54 Traits of good remote leaders sfg 356
2020-09-09 22:07:59 Show HN: Eiten – open-source tool for portfolio optimization hydershykh 200
2020-09-08 09:51:43 Ask HN: Any well funded tech companies tackling big, meaningful problems? digitalmaster 97
2020-09-07 17:50:02 Column Names as Contracts MaysonL 55
2020-09-06 00:49:07 Graph Representations for Higher-Order Logic and Theorem Proving (2019) brzozowski 104
2020-09-04 22:37:14 Show HN: Linux sysadmin course, eight years on snori74 780
2020-09-03 05:12:02 Software supply chain security mayakacz 82
2020-09-01 13:53:23 Mind Emulation Foundation gk1 93
2020-08-31 22:41:41 13 Beautiful Tools to Enhance Online Teaching and Learning Skills alikayaspor 15
2020-08-28 06:34:50 How close are computers to automating mathematical reasoning? auggierose 100
2020-08-29 11:06:54 New framework for natural capital approach to transform policy decisions westurner 2
2020-08-24 09:19:08 Challenge to scientists: does your ten-year-old code still run? sohkamyung 305
2020-08-19 14:54:09 A deep dive into the official Docker image for Python itamarst 189
2020-08-18 19:01:49 The Consortium for Python Data API Standards BerislavLopac 102
2020-08-07 15:02:57 Tech giants let the Web's metadata schemas and infrastructure languish timhigins 301
2020-08-10 10:39:15 Time-reversal of an unknown quantum state samizdis 23
2020-08-08 12:48:07 Electric cooker an easy, efficient way to sanitize N95 masks, study finds johnny313 201
2020-08-09 19:13:33 Fed announces details of new interbank service to support instant payments tigerlily 682
2020-08-08 21:17:02 Shrinking deep learning’s carbon footprint dsavant 4
2020-08-02 10:44:33 Show HN: Starboard – Fully in-browser literate notebooks like Jupyter Notebook protoduction 369
2020-07-23 16:11:55 Ask HN: Learning about distributed systems? shahrk 35
2020-08-01 22:13:32 Ask HN: How can I “work-out” critical thinking skills as I age? treyfitty 87
2020-07-29 17:21:42 The tragedy of FireWire: Collaborative tech torpedoed by corporations segfaultbuserr 3
2020-07-29 17:17:29 The Developer’s Guide to Audit Logs / SIEM endingwithali 9
2020-07-29 04:26:06 Del.icio.us kome 1649
2020-07-24 19:37:41 Ask HN: Recommendations for Books on Writing? wwright 5
2020-07-23 14:10:29 Ask HN: How did you learn x86-64 assembly? spacechild1 48
2020-07-22 09:39:11 Brain connectivity levels are equal in all mammals, including humans: study hhs 197
2020-07-22 04:21:32 Ask HN: Resources to start learning about quantum computing? edu 185
2020-07-21 11:58:25 Launch HN: Charityvest (YC S20) – Employee charitable funds and gift matching Leonidas243 64
2020-07-20 16:52:03 We Need a Yelp for Doctoral Programs etattva 180
2020-07-20 01:21:29 All of the World’s Money and Markets in One Visualization hippich 135
2020-07-18 21:06:28 Why companies lose their best innovators (2019) hhs 190
2020-07-17 16:48:58 Powerful AI Can Now Be Trained on a Single Computer MindGods 282
2020-07-10 12:30:36 Ask HN: Something like Khan Academy but full curriculum for grade schoolers? jmspring 283
2020-07-09 13:35:44 AutoML-Zero: Evolving Code That Learns theafh 34
2020-07-06 08:25:22 SymPy - a Python library for symbolic mathematics ogogmad 209
2020-07-03 17:05:31 Ask HN: Are there any messaging apps supporting Markdown? 5986043handy 19
2020-06-24 12:36:53 What vertical farming and ag startups don't understand about agriculture kickout 348
2020-06-15 05:26:29 Ask HN: What are your go to SaaS products for startups/MVPs? lbj 169
2020-06-13 08:31:58 Ask HN: Do you read aloud or silently in your minds? Onceagain 6
2020-06-08 08:42:33 Ask HN: How do you deploy a Django app in 2020? eptakilo 3
2020-06-04 21:35:38 Containers from first principles setheron 102
2020-05-27 17:16:56 How many people did it take to build the Great Pyramid? samizdis 136
2020-05-14 16:44:28 Solar’s Future is Insanely Cheap epistasis 152
2020-05-20 14:52:29 Demo of an OpenAI language model applied to code generation [video] cjlovett 281
2020-05-04 18:51:16 Future of the human climate niche origgm 96
2020-05-15 06:25:43 Ask HN: Best resources for non-technical founders to understand hacker mindset? jamiecollinson 114
2020-05-11 10:08:31 Dissecting the code responsible for the Bitcoin halving Mojah 39
2020-04-30 13:06:53 Ask HN: Does mounting servers parallel with the temperature gradient trap heat? westurner 2
2020-04-26 16:33:13 Psychological techniques to practice Stoicism hoanhan101 173
2020-04-25 10:00:05 What does the 'rc' in `.bashrc`, etc. mean? janvdberg 297
2020-04-23 16:19:24 Google ditched tipping feature for donating money to sites caution 2
2020-04-23 15:58:23 Innovating on Web Monetization: Coil and Firefox Reality stareatgoats 2
2020-04-19 22:24:07 Ask HN: Recommendations for online essay grading systems? westurner 1
2020-04-19 22:28:00 Ask HN: Systems for supporting Evidence-Based Policy? westurner 1
2020-04-19 14:54:31 Facebook, Google to be forced to share ad revenue with Australian media docdeek 148
2020-04-11 12:36:55 France rules Google must pay news firms for content us0r 134
2020-04-05 03:00:45 Adafruit Thermal Camera Imager for Fever Screening jonbaer 2
2020-03-31 18:08:57 The end of an Era – changing every single instance of a 32-bit time_t in Linux zdw 165
2020-04-01 01:16:29 Ask HN: What's the ROI of Y Combinator investments? longtermd 4
2020-04-01 00:41:15 Microsoft announces Money in Excel powered by Plaid chirau 3
2020-03-30 02:02:12 Lora-based device-to-device smartphone communication for crisis scenarios [pdf] oliver2213 90
2020-03-27 17:56:01 LoRa+WiFi ClusterDuck Protocol by Project OWL for Disaster Relief westurner 3
2020-03-26 02:53:34 A Visual Debugger for Jupyter sandGorgon 197
2020-03-27 18:45:26 Ask HN: What's the Equivalent of 'Hello, World' for a Quantum Computer? simonblack 2
2020-03-27 18:43:58 Ask HN: Communication platforms for intermittent disaster relief? westurner 1
2020-03-27 18:06:49 DroneAid: A Symbol Language and ML model for indicating needs to drones, planes westurner 2
2020-03-26 06:52:53 Ask HN: Computer Science/History Books? jackofalltrades 327
2020-03-26 06:07:26 Open-source security tools for cloud and container applications alexellisuk 53
2020-03-25 14:26:44 YC Companies Responding to Covid-19 no_gravity 144
2020-03-23 18:21:18 Show HN: Neh – Execute any script or program from Nginx location directives oap_bram 27
2020-03-21 15:39:25 Ask HN: How can an intermediate-beginner learn Unix/Linux and programming? learnTemp229462 146
2020-03-20 09:40:37 Math Symbols Explained with Python amitness 130
2020-03-20 00:16:15 Ask HN: Is there way you can covert smartphone to a no contact thermometer? shreyshrey 9
2020-03-15 05:47:35 Employee Scheduling weitzj 641
2020-03-14 07:01:16 Show HN: Simulation-based high school physics course notes lilgreenland 295
2020-03-15 04:58:04 WebAssembly brings extensibility to network proxies pjmlp 132
2020-03-14 00:29:09 Pandemic Ventilator Project mhb 318
2020-03-14 02:53:51 Low-cost ventilator wins Sloan health care prize (2019) tomcam 99
2020-03-13 19:22:55 AI can detect coronavirus from CT scans in twenty seconds laurex 109
2020-03-10 16:08:03 AutoML-Zero: Evolving machine learning algorithms from scratch lainon 260
2020-03-10 16:48:16 Options for giving math talks and lectures online chmaynard 143
2020-03-04 06:29:43 Aerogel from fruit biowaste produces ultracapacitors dalf 152
2020-03-03 05:09:35 Ask HN: How to Take Good Notes? romes 293
2020-03-03 06:36:58 Ask HN: STEM toy for a 3 years old? spapas82 117
2020-02-29 14:17:55 OpenAPI v3.1 and JSON Schema 2019-09 BerislavLopac 88
2020-02-26 03:06:01 Git for Node.js and the browser using libgit2 compiled to WebAssembly mstade 16
2020-02-20 21:02:47 Scientists use ML to find an antibiotic able to kill superbugs in mice adventured 438
2020-02-11 17:35:48 Shit – An implementation of Git using POSIX shell kick 814
2020-02-01 19:01:19 HTTP 402: Payment Required jpomykala 224
2020-01-16 15:28:07 Salesforce Sustainability Cloud Becomes Generally Available westurner 1
2020-01-09 07:07:33 Httpx: A next-generation HTTP client for Python tomchristie 462
2020-01-14 06:07:53 BlackRock CEO: Climate Crisis Will Reshape Finance vo2maxer 13
2019-12-29 13:32:58 A lot of complex “scalable” systems can be done with a simple, single C++ server Impossible 398
2019-12-31 10:19:32 Warren Buffett is spending billions to make Iowa 'the Saudi Arabia of wind' corporate_shi11 52
2019-12-27 07:08:54 Scientists Likely Found Way to Grow New Teeth for Patients elorant 243
2019-12-26 13:32:34 Announcing the New PubMed vo2maxer 119
2019-12-25 08:16:17 Ask HN: Is it worth it to learn C in 2020? zabana 11
2019-12-21 07:55:04 Free and Open-Source Mathematics Textbooks vo2maxer 321
2019-12-18 09:24:05 Make CPython segfault in 5 lines of code coolreader18 130
2019-12-10 12:05:36 Applications Are Now Open for YC Startup School – Starts in January erohead 48
2019-12-10 14:37:28 ‘Adulting’ is hard. UC Berkeley has a class for that incomplete 2
2019-12-10 13:55:50 Founder came back after 8 years to rewrite flash photoshop in canvas/WebGL poniko 9
2019-12-09 09:56:35 Five cities account for vast majority of growth in U.S. tech jobs: study Bostonian 93
2019-12-01 12:45:50 Don’t Blame Tech Bros for the Housing Crisis mistersquid 30
2019-11-25 09:07:30 Docker is just static linking for millennials DyslexicAtheist 38
2019-11-14 04:01:54 Show HN: Bamboolib – A GUI for Pandas (Python Data Science) __tobals__ 119
2019-11-25 01:39:22 Battery-Electric Heavy-Duty Equipment: It's Sort of Like a Cybertruck duck 3
2019-11-09 09:26:55 Tools for turning descriptions into diagrams: text-to-picture resources ingve 61
2019-10-16 00:42:33 CSR: Corporate Social Responsibility westurner 2
2019-10-19 08:28:01 GTD Tickler file – a proposal for text file format vivekv 3
2019-10-20 02:07:48 Ask HN: Any suggestion on how to test CLI applications? pdappollonio 3
2019-10-16 00:34:32 The Golden Butterfly and the All Weather Portfolio westurner 1
2019-10-12 07:19:23 Canada's Decision To Make Public More Clinical Trial Data Puts Pressure On FDA pseudolus 192
2019-10-10 23:35:35 Python Alternative to Docker gilad 3
2019-10-09 00:17:45 $6B United Nations Agency Launches Bitcoin, Ethereum Crypto Fund zed88 8
2019-10-08 16:03:02 Timsort, the Python sorting algorithm alexchamberlain 407
2019-10-07 22:29:21 Supreme Court allows blind people to sue retailers if websites aren't accessible justadudeama 743
2019-10-04 11:15:12 Streamlit: Turn a Python script into an interactive data analysis tool danicgross 467
2019-09-23 16:43:51 Scott’s Supreme Quantum Supremacy FAQ xmmrm 600
2019-09-23 18:31:40 Ask HN: How do you handle/maintain local Python environments? PascLeRasc 103
2019-09-23 12:35:51 Is the era of the $100 graphing calculator coming to an end? prostoalex 361
2019-09-23 03:17:17 Reinventing Home Directories Schiphol 118
2019-09-23 03:00:38 Serverless: slower and more expensive kiyanwang 1787
2019-09-22 17:32:04 Entropy can be used to understand systems acgan 3
2019-09-18 07:24:36 New Query Language for Graph Databases to Become International Standard Anon84 290
2019-09-21 13:21:03 A Python Interpreter Written in Python nnnmnten 2
2019-09-21 11:51:00 Reinventing Home Directories – systemd-homed [pdf] signa11 3
2019-09-21 13:08:28 Weld: Accelerating numpy, scikit and pandas as much as 100x with Rust and LLVM unbalancedparen 585
2019-09-19 20:00:14 Craftsmanship–The Alternative to the 4 Hour Work Week oglowo3 4
2019-09-19 09:31:43 Solar and Wind Power So Cheap They’re Outgrowing Subsidies ph0rque 623
2019-09-18 06:52:46 Show HN: Python Tests That Write Themselves timothycrosley 131
2019-09-09 10:52:49 Most Americans see catastrophic weather events worsening elorant 102
2019-09-17 12:00:54 Emergent Tool Use from Multi-Agent Interaction gdb 332
2019-09-17 22:32:25 Inkscape 1.0 Beta 1 nkoren 603
2019-09-08 13:45:57 Where Dollar Bills Come From danso 69
2019-09-05 07:13:24 Monetary Policy Is the Root Cause of the Millennials’ Struggle joshuafkon 52
2019-08-30 15:42:12 Non-root containers, Kubernetes CVE-2019-11245 and why you should care zelivans 8
2019-08-25 23:49:46 How do black holes destroy information and why is that a problem? sohkamyung 195
2019-08-25 09:48:11 Banned C standard library functions in Git source code susam 502
2019-08-25 10:01:30 Ask HN: What's the hardest thing to secure in a web-app? juansgaitan 7
2019-08-22 01:29:43 Crystal growers who sparked a revolution in graphene electronics sohkamyung 85
2019-08-22 16:27:43 Things to Know About GNU Readline matt_d 204
2019-08-22 16:16:41 Show HN: Termpage – Build a webpage that behaves like a terminal brisky 5
2019-08-21 22:49:19 Vimer - Avoid multiple instances of GVim with gvim –remote[-tab]-silent wrapper grepgeek 6
2019-08-22 16:06:27 Electric Dump Truck Produces More Energy Than It Uses mreome 3
2019-08-21 17:34:53 Ask HN: Let's make an open source/free SaaS platform to tackle school forms busymichael 12
2019-08-21 14:18:17 Ask HN: Is there a CRUD front end for databases (especially SQLite)? Tomte 2
2019-08-20 06:43:31 California approves solar-powered EV charging network and electric school buses elorant 15
2019-08-17 10:58:03 You May Be Better Off Picking Stocks at Random, Study Finds Vaslo 146
2019-08-12 08:15:23 Root: CERN's scientific data analysis framework for C++ z3phyr 137
2019-08-13 02:09:30 MesaPy: A Memory-Safe Python Implementation based on PyPy (2018) ospider 119
2019-08-11 16:22:30 Ask HN: Configuration Management for Personal Computer? jacquesm 197
2019-08-08 13:11:06 GitHub Actions now supports CI/CD, free for public repositories dstaheli 680
2019-08-05 17:19:30 The Fed is getting into the Real-Time payments business apo 96
2019-07-08 15:26:38 A Giant Asteroid of Gold Won’t Make Us Richer pseudolus 92
2019-07-08 10:52:06 Abusing the PHP Query String Parser to Bypass IDS, IPS, and WAF lelf 92
2019-06-28 14:23:33 Ask HN: Scripts/commands for extracting URL article text? (links -dump but) WCityMike 1
2019-07-02 11:02:08 NPR's Guide to Hypothesis-Driven Design for Editorial Projects danso 101
2019-06-20 14:56:56 Gryphon: An open-source framework for algorithmic trading in cryptocurrency reso 236
2019-06-21 00:18:36 Wind-Powered Car Travels Downwind Faster Than the Wind J253 5
2019-06-13 19:39:58 NOAA upgrades the U.S. global weather forecast model mehrdadn 214
2019-06-12 08:16:17 A plan to change how Harvard teaches economics carlosgg 116
2019-06-12 17:41:58 The New York Times course to teach its reporters data skills is now open-source espeed 423
2019-06-11 10:21:59 No Kings: How Do You Make Good Decisions Efficiently in a Flat Organization? eugenegamma 743
2019-06-01 23:13:28 4 Years of College, $0 in Debt: How Some Countries Make Education Affordable pseudolus 2
2019-05-26 10:16:10 Ask HN: What jobs can a software engineer take to tackle climate change? envfriendly 67
2019-05-23 12:59:05 YC's request for startups: Government 2.0 simonebrunozzi 194
2019-05-23 13:52:23 Almost 40% of Americans Would Struggle to Cover a $400 Emergency Geeek 112
2019-05-19 16:01:51 Congress should grow the Digital Services budget, it more than pays for itself rmason 68
2019-05-20 01:20:05 The Trillion-Dollar Annual Interest Payment westurner 2
2019-05-15 07:09:29 Oak, a Free and Open Certificate Transparency Log dankohn1 143
2019-05-14 09:36:21 Death rates from energy production per TWh peter_retief 122
2019-05-11 22:37:32 Use links not keys to represent relationships in APIs sarego 342
2019-05-09 23:49:28 No Python in Red Hat Linux 8? jandeboevrie 19
2019-05-06 09:16:47 JMAP: A modern, open email protocol okket 307
2019-05-09 14:51:33 Grid Optimization Competition zeristor 2
2019-05-02 16:11:54 Blockchain's present opportunity: data interchange standardization ivoras 2
2019-04-30 12:45:38 Ask HN: Value of “Shares of Stock options” when joining a startup cdeveloper 5
2019-04-28 13:46:48 CMU Computer Systems: Self-Grading Lab Assignments (2018) georgecmu 205
2019-04-28 14:50:29 Show HN: Debugging-Friendly Tracebacks for Python cknd 121
2019-04-28 07:41:27 Why isn't 1 a prime number? gpvos 273
2019-04-28 07:26:37 How do we know when we’ve fallen in love? (2016) rohmanhakim 157
2019-04-27 21:50:58 Rare and strange ICD-10 codes zdw 68
2019-04-20 15:10:14 Python Requests III maximilianroos 19
2019-04-17 09:43:04 Post-surgical deaths in Scotland drop by a third, attributed to a checklist fanf2 1036
2019-04-17 16:06:09 Apply to Y Combinator dlhntestuser 3
2019-04-02 03:51:50 Trunk-Based Development vs. Git Flow kiyanwang 4
2019-04-01 17:25:58 Ask HN: Anyone else write the commit message before they start coding? xkapastel 25
2019-03-27 03:29:30 Ask HN: Datalog as the only language for web programming, logic and database truth_seeker 21
2019-03-24 19:46:33 The cortex is a neural network of neural networks curtis 297
2019-03-22 21:51:49 Is there a program like codeacademy but for learning sysadmin? tayvz 7
2019-03-22 17:18:44 Maybe You Don't Need Kubernetes ra7 500
2019-03-21 08:04:34 Quantum Machine Appears to Defy Universe’s Push for Disorder biofox 78
2019-03-21 12:45:42 Pytype checks and infers types for your Python code mkesper 4
2019-03-20 21:56:26 How I'm able to take notes in mathematics lectures using LaTeX and Vim tambourine_man 674
2019-03-21 05:18:51 LHCb discovers matter-antimatter asymmetry in charm quarks rbanffy 269
2019-03-21 00:22:37 React Router v5 jsdev93 153
2019-03-15 18:23:21 Experimental rejection of observer-independence in the quantum world lisper 186
2019-03-15 08:14:22 Show HN: A simple Prolog Interpreter written in a few lines of Python 3 photon_lines 148
2019-03-07 17:57:28 How to earn your macroeconomics and finance white belt as a software developer andrenth 307
2019-03-02 14:24:35 Ask HN: Relationship between set theory and category theory fmihaila 4
2019-02-26 11:24:41 The most popular docker images each contain at least 30 vulnerabilities vinnyglennon 562
2019-02-24 22:39:39 Tinycoin: A small, horrible cryptocurrency in Python for educational purposes MrXOR 4
2019-02-20 14:08:47 When does the concept of equilibrium work in economics? dnetesn 54
2019-02-20 22:53:23 Simdjson – Parsing Gigabytes of JSON per Second cmsimike 597
2019-02-18 10:13:02 A faster, more efficient cryptocurrency salvadormon 583
2019-02-17 05:52:11 Git-signatures – Multiple PGP signatures for your commits Couto 75
2019-02-16 06:55:28 Running an LED in reverse could cool future computers ChrisGranger 46
2019-02-06 07:15:56 Compounding Knowledge golyi 481
2019-02-16 14:49:30 Why CISA Issued Our First Emergency Directive ca98am79 211
2019-02-14 23:22:11 Chrome will Soon Let You Share Links to a Specific Word or Sentence on a Page kumaranvpl 359
2019-02-09 12:21:30 Guidelines for keeping a laboratory notebook Tomte 87
2019-02-07 12:03:47 Superalgos and the Trading Singularity ciencias 2
2019-02-07 12:23:44 Crunching 200 years of stock, bond, currency and commodity data chollida1 308
2019-02-06 14:50:35 Show HN: React-Schemaorg: Strongly-Typed Schema.org JSON-LD for React Eyas 16
2019-02-06 16:15:33 Consumer Protection Bureau Aims to Roll Back Rules for Payday Lending pseudolus 197
2019-02-05 01:56:30 Lectures in Quantitative Economics as Python and Julia Notebooks westurner 355
2019-02-04 11:55:50 If Software Is Funded from a Public Source, Its Code Should Be Open Source jrepinc 1138
2019-02-04 23:55:48 Apache Arrow 0.12.0 westurner 1
2019-02-04 23:51:34 Statement on Status of the Consolidated Audit Trail (2018) westurner 1
2019-02-04 20:03:28 U.S. Federal District Court Declared Bitcoin as Legal Money obilgic 12
2019-01-30 12:42:06 Post Quantum Crypto Standardization Process – Second Round Candidates Announced dlgeek 2
2019-01-30 13:59:56 Ask HN: How do you evaluate security of OSS before importing? riyakhanna1983 5
2019-01-30 09:35:47 Ask HN: How can I use my programming skills to support nonprofit organizations? theneck 3
2019-01-29 19:43:16 Ask HN: Steps to forming a company? jxr006 4
2019-01-29 13:48:48 A Self-Learning, Modern Computer Science Curriculum hacknrk 394
2019-01-24 00:34:14 MVP Spec hyperpallium 2
2019-01-21 12:10:37 Can we merge Certificate Transparency with blockchain? fedotovcorp 3
2019-01-21 20:38:23 Why Don't People Use Formal Methods? pplonski86 419
2019-01-20 20:29:25 Steps to a clean dataset with Pandas NicoJuicy 4
2019-01-19 19:38:48 Reahl – A Python-only web framework kim0 165
2019-01-12 19:56:20 Ask HN: How can you save money while living on poverty level? ccdev 8
2019-01-11 14:46:52 A DNS hijacking wave is targeting companies at an almost unprecedented scale Elof 112
2019-01-09 23:09:59 Show HN: Generate dank mnemonic seed phrases in the terminal mofle 3
2019-01-08 15:28:29 Can you sign a quantum state? zdw 3
2019-01-09 18:04:41 Lattice Attacks Against Weak ECDSA Signatures in Cryptocurrencies [pdf] soohyung 11
2019-01-09 12:00:44 REMME – A blockchain-based protocol for issuing X.509 client certificates fedotovcorp 33
2019-01-08 09:51:20 California grid data is live – solar developers take note Osiris30 2
2019-01-05 12:30:30 Why attend predatory colleges in the US? azhenley 3
2018-12-31 15:43:54 Ask HN: Data analysis workflow? tucaz 1
2018-12-28 16:25:15 The U.S. is spending millions to solve mystery sonic attacks on diplomats johnshades 5
2018-12-27 10:00:38 Ask HN: What is your favorite open-source job scheduler bohinjc 6
2018-12-22 06:53:46 How to Version-Control Jupyter Notebooks tosh 164
2018-12-04 10:25:47 Teaching and Learning with Jupyter (A book by Jupyter for Education) westurner 5
2018-11-27 17:48:54 Margin Notes: Automatic code documentation with recorded examples from runtime mpweiher 67
2018-11-24 15:33:08 Time to break academic publishing's stranglehold on research joeyespo 692
2018-11-22 10:32:27 Ask HN: How can I learn to read mathematical notation? cursorial 211
2018-10-18 18:07:59 New law lets you defer capital gains taxes by investing in opportunity zones rmason 88
2018-10-15 19:55:06 How to Write a Technical Paper [pdf] boricensis 360
2018-10-15 15:19:40 JSON-LD 1.0: A JSON-Based Serialization for Linked Data geezerjay 2
2018-10-14 15:30:29 Jeff Hawkins Is Finally Ready to Explain His Brain Research tysone 489
2018-10-12 03:02:01 Interstellar Visitor Found to Be Unlike a Comet or an Asteroid Bootvis 204
2018-10-12 02:15:03 Publishing more data behind our reporting gballan 146
2018-10-10 22:23:44 CSV 1.1 – CSV Evolved (for Humans) polm23 84
2018-10-11 06:42:34 Ask HN: Which plants can be planted indoors and easily maintained? gymshoes 123
2018-10-08 10:23:38 Graduate Student Solves Quantum Verification Problem digital55 267
2018-10-05 07:53:30 The down side to wind power todd8 63
2018-10-05 05:47:19 Thermodynamics of Computation Wiki westurner 2
2018-10-04 09:27:48 Why Do Computers Use So Much Energy? tshannon 220
2018-09-30 22:11:07 Justice Department Sues to Stop California Net Neutrality Law jonburs 201
2018-09-22 10:52:45 White House Drafts Order to Probe Google, Facebook Practices Jerry2 105
2018-09-19 20:37:52 Ask HN: Books about applying the open source model to society kennu 1
2018-09-12 16:02:35 Today, Europe Lost The Internet. Now, We Fight Back DiabloD3 433
2018-09-01 14:13:52 Consumer science (a.k.a. home economics) as a college major guard0g 4
2018-08-28 11:18:26 Facebook vows to run on 100 percent renewable energy by 2020 TamoC 2
2018-08-30 12:51:10 California Moves to Require 100% Clean Electricity by 2045 dsr12 407
2018-08-29 11:15:59 Miami Will Be Underwater Soon. Its Drinking Water Could Go First hourislate 264
2018-08-29 22:50:51 Free hosting VPS for NGO project? vikramjb 1
2018-08-29 12:18:35 The Burden: Fossil Fuel, the Military and National Security westurner 3
2018-08-29 02:27:58 Scientists Warn the UN of Capitalism's Imminent Demise westurner 1
2018-08-28 14:41:52 Firefox Nightly Secure DNS Experimental Results Vinnl 40
2018-08-28 08:31:48 Long-sought decay of Higgs boson observed at CERN chmaynard 243
2018-08-28 09:00:54 Sen. Wyden Confirms Cell-Site Simulators Disrupt Emergency Calls DiabloD3 518
2018-08-23 00:01:34 Building a Model for Retirement Savings in Python koblenski 3
2018-08-20 21:38:10 New E.P.A. Rollback of Coal Pollution Regulations Takes a Major Step Forward yaseen-rob 3
2018-08-20 14:21:22 Researchers Build Room-Temp Quantum Transistor Using a Single Atom jonbaer 3
2018-08-20 10:55:17 New “Turning Tables” Technique Bypasses All Windows Kernel Mitigations yaseen-rob 2
2018-08-19 22:27:20 Um – Create your own man pages so you can remember how to do stuff quickthrower2 646
2018-08-15 04:52:10 Leverage Points: Places to Intervene in a System pjc50 113
2018-08-15 03:46:23 SQLite Release 3.25.0 adds support for window functions MarkusWinand 333
2018-08-15 19:53:03 Update on the Distrust of Symantec TLS Certificates dumpsterkid 3
2018-08-11 07:57:44 The Transport Layer Security (TLS) Protocol Version 1.3 dochtman 255
2018-08-12 08:56:52 Academic Torrents – Making 27TB of research data available jacquesm 1081
2018-08-10 15:19:24 1/0 = 0 ingve 650
2018-08-07 15:43:05 Power Worth Less Than Zero Spreads as Green Energy Floods the Grid bumholio 537
2018-08-05 15:27:39 Kernels, a free hosted Jupyter notebook environment with GPUs benhamner 95
2018-07-22 14:16:25 Solar and wind are coming. And the power sector isn’t ready spenrose 174
2018-07-11 13:15:47 Solar Just Hit a Record Low Price in the U.S toomuchtodo 456
2018-07-10 23:53:58 Causal Inference Book luu 104
2018-07-02 10:18:14 Tim Berners-Lee is working a platform designed to re-decentralize the web rapnie 36
2018-07-01 06:49:08 More States Opting to 'Robo-Grade' Student Essays by Computer happy-go-lucky 44
2018-07-02 07:26:28 Ask HN: Looking for a simple solution for building an online course r4victor 57
2018-06-30 15:45:56 There is now a backprop principle for deep learning on quantum computers GVQ 3
2018-06-30 21:03:36 New research a ‘breakthrough for large-scale discrete optimization’ new_guy 96
2018-06-29 23:17:31 Wind, solar farms produce 10% of US power in the first four months of 2018 toomuchtodo 85
2018-06-25 16:57:46 FDA approves first marijuana-derived drug and it may spark DEA rescheduling mikece 150
2018-06-21 10:22:43 States Can Require Internet Tax Collection, Supreme Court Rules uptown 541
2018-06-18 08:26:23 William Jennings Bryan’s “Cross of Gold” Speech zjacobi 71
2018-06-17 18:13:13 Ask HN: Do you consider yourself to be a good programmer? type0 27
2018-06-17 11:00:59 Handles are the better pointers ingve 194
2018-06-14 14:13:13 Neural scene representation and rendering johnmoberg 540
2018-06-17 20:19:20 New US Solar Record – 2.155 Cents per KWh prostoalex 4
2018-06-10 18:04:07 Ask HN: Is there a taxonomy of machine learning types? ljw1001 3
2018-05-22 16:22:43 Senator requests better https compliance at US Department of Defense [pdf] anigbrowl 168
2018-05-22 23:15:18 Banks Adopt Military-Style Tactics to Fight Cybercrime petethomas 3
2018-04-12 13:13:10 No, Section 230 Does Not Require Platforms to Be “Neutral” panarky 6
2018-04-11 14:28:06 Ask HN: Do battery costs justify “buy all sell all” over “net metering”? westurner 1
2018-04-09 21:17:43 Portugal electricity generation temporarily reaches 100% renewable mgdo 234
2018-04-06 19:16:25 GPU Prices Drop ~25% in March as Supply Normalizes merqurio 2
2018-04-09 23:51:08 Apple says it’s now powered by renewable energy worldwide iamspoilt 272
2018-03-18 13:13:15 Hackers Are So Fed Up with Twitter Bots They’re Hunting Them Down Themselves CrankyBear 271
2018-03-02 08:21:41 “We’re committing Twitter to increase the health and civility of conversation” dankohn1 147
2018-03-01 02:06:42 Gitflow – Animated in React v33ra 3
2018-02-28 22:06:35 Ask HN: How feasible is it to become proficient in several disciplines? diehunde 4
2018-02-27 09:47:40 After rising for 100 years, electricity demand is flat aaronbrethorst 629
2018-02-27 10:37:54 A framework for evaluating data scientist competency schaunwheeler 3
2018-02-27 18:28:01 Levi Strauss to use lasers instead of people to finish jeans e2e4 3
2018-02-27 18:24:45 Chaos Engineering: the history, principles, and practice austingunter 2
2018-02-27 09:52:39 Scientists use an atomic clock to measure the height of a mountain montrose 45
2018-02-27 18:10:10 Resources to learn project management best practices? chuie 1
2018-02-22 15:35:51 Ask HN: Thoughts on a website-embeddable, credential validating service? estroz 28
2018-02-21 05:03:58 Ask HN: What's the best algorithms and data structures online course? zabana 272
2018-02-20 15:14:40 Using Go as a scripting language in Linux neoasterisk 8
2018-02-18 12:09:07 Guidelines for enquiries regarding the regulatory framework for ICOs [pdf] paulsutter 23
2018-02-16 00:16:09 The Benjamin Franklin method for learning more from programming books nancyhua 566
2018-02-10 20:41:21 Avoiding blackouts with 100% renewable energy ramonvillasante 2
2018-02-10 11:25:54 Ask HN: What are some common abbreviations you use as a developer? yagamidev 3
2018-02-09 19:42:21 There Might Be No Way to Live Comfortably Without Also Ruining the Planet SirLJ 43
2018-02-08 22:52:44 Multiple GWAS finds 187 intelligence genes and role for neurogenesis/myelination gwern 2
2018-02-08 20:33:49 Could we solve blockchain scaling with terabyte-sized blocks? gwern 4
2018-02-07 20:50:24 Ask HN: Do you have ADD/ADHD? How do you manage it? vumgl 4
2018-02-03 14:36:02 Ask HN: How to understand the large codebase of an open-source project? maqbool 186
2018-02-03 13:56:30 What is the best way to learn to code from absolute scratch? eliotpeper 8
2018-02-02 04:35:58 Tesla racing series: Electric cars get the green light – Roadshow rbanffy 77
2018-02-02 13:40:19 What happens if you have too many jupyter notebooks? tvorogme 4
2018-02-01 00:49:46 Cancer ‘vaccine’ eliminates tumors in mice jv22222 942
2018-02-01 12:23:08 Boosting teeth’s healing ability by mobilizing stem cells in dental pulp digital55 306
2018-01-29 17:11:55 This Biodegradable Paper Donut Could Let Us Reforest the Planet westurner 2
2018-01-29 16:44:35 Drones that can plant 100k trees a day artsandsci 147
2018-01-27 22:21:28 What are some YouTube channels to progress into advanced levels of programming? altsyset 41
2018-01-25 17:41:24 Multiple issue and pull request templates clarkbw 17
2018-01-25 17:38:38 Five myths about Bitcoin’s energy use nvk 10
2018-01-23 18:41:16 Ask HN: Which programming language has the best documentation? siquick 3
2018-01-18 06:36:07 Ask HN: Recommended course/website/book to learn data structure and algorithms strikeX 3
2018-01-19 17:06:07 Why is quicksort better than other sorting algorithms in practice? isp 5
2018-01-18 16:16:16 ORDO: a modern alternative to X.509 juancampa 1
2018-01-18 11:47:03 Wine 3.0 Released etiam 724
2018-01-18 19:51:30 Kimbal Musk is leading a $25M mission to fix food in US schools rmason 2
2018-01-13 21:42:47 Spinzero – A Minimal Jupyter Notebook Theme neilpanchal 5
2018-01-11 13:27:17 What does the publishing industry bring to the Web? mpweiher 2
2018-01-10 14:02:09 Git is a blockchain Swizec 13
2018-01-07 12:06:03 Show HN: Convert Matlab/NumPy matrices to LaTeX tables tpaschalis 4
2018-01-02 10:48:10 A Year of Spaced Repetition Software in the Classroom misiti3780 4
2017-12-27 08:32:39 NIST Post-Quantum Cryptography Round 1 Submissions sohkamyung 130
2018-01-01 21:38:58 What are some good resources to learn about Quantum Computing? nmehta21 3
2017-12-29 15:53:06 Gridcoin: Rewarding Scientific Distributed Computing trueduke 134
2017-12-26 12:37:07 Power Prices Go Negative in Germany kwindla 485
2017-12-21 14:30:35 Mathematicians Find Wrinkle in Famed Fluid Equations digital55 240
2017-12-20 10:43:31 Bitcoin is an energy arbitrage js4 51
2017-12-19 17:03:30 There are now more than 200k pending Bitcoin transactions OyoKooN 192
2017-12-17 22:16:06 What ORMs have taught me: just learn SQL (2014) ausjke 540
2017-12-17 07:32:06 Show HN: An educational blockchain implementation in Python jre 412
2017-12-16 08:12:44 MSU Scholars Find $21T in Unauthorized Government Spending sillypuddy 137
2017-12-13 04:59:42 Universities spend millions on accessing results of publicly funded research versteegen 624
2017-12-11 19:49:44 An Interactive Introduction to Quantum Computing kevlened 254
2017-12-12 12:34:46 Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256) westurner 2
2017-12-10 17:50:44 Project Euler vinchuco 792
2017-12-12 10:17:39 Who’s Afraid of Bitcoin? The Futures Traders Going Short thisisit 54
2017-12-11 19:21:38 Statement on Cryptocurrencies and Initial Coin Offerings corbinpage 811
2017-12-11 15:02:04 Ask HN: How do you stay focused while programming/working? flipfloppity 83
2017-12-08 10:53:49 A Hacker Writes a Children's Book arthurjj 171
2017-12-11 18:17:52 Ask HN: Do ISPs have a legal obligation to not sell minors' web history anymore? westurner 2
2017-12-11 11:58:38 Tech luminaries call net neutrality vote an 'imminent threat' kjhughes 279
2017-12-06 18:55:25 Ask HN: Can hashes be replaced with optimization problems in blockchain? pacavaca 3
2017-12-01 01:19:43 Ask HN: What could we do with all the mining power of Bitcoin? Fold Protein? sova 3
2017-12-03 20:14:58 No CEO needed: These blockchain platforms will let ‘the crowd’ run startups maxwellnardi 4
2017-12-04 04:59:08 How much energy does Bitcoin mining really use? trueduke 3
2017-12-02 00:27:40 The Actual FCC Net Neutrality Repeal Document. TLDR: Read Pages 82-87 [pdf] croatoan 3
2017-12-01 21:55:26 The 5 most ridiculous things the FCC says in its new net neutrality propaganda pulisse 164
2017-12-01 13:15:47 FCC's Pai, addressing net neutrality rules, calls Twitter biased joeyespo 13
2017-12-01 05:49:25 A curated list of Chaos Engineering resources dastergon 51
2017-12-01 11:24:06 Technology behind Bitcoin could aid science, report says digital55 13
2017-11-30 15:07:26 Git hash function transition plan vszakats 215
2017-11-30 22:04:20 Vintage Cray Supercomputer Rolls Up to Auction ohjeez 3
2017-11-30 21:21:09 Google is officially 100% sun and wind powered – 3.0 gigawatts worth rippsu 163
2017-11-29 12:29:30 Interactive workflows for C++ with Jupyter SylvainCorlay 292
2017-11-28 16:01:32 Vanguard Founder Jack Bogle Says ‘Avoid Bitcoin Like the Plague’ dionmanu 105
2017-11-29 11:22:54 Nasdaq Plans to Introduce Bitcoin Futures knwang 416
2017-11-28 17:49:07 Ask HN: Where do you think Bitcoin will be by 2020? rblion 10
2017-11-28 18:03:11 Ask HN: Why would anyone share trading algorithms and compare by performance? westurner 1
2017-11-25 06:28:39 Ask HN: CS papers for software architecture and design? avrmav 513
2017-11-15 10:24:27 Keeping a Lab Notebook [pdf] Tomte 327
2017-10-28 08:12:53 How to teach technical concepts with cartoons Tomte 170
2017-10-22 16:43:03 Fact Checks fanf2 126
2017-10-19 05:51:13 DHS orders agencies to adopt DMARC email security puppetmaster30 2
2017-10-18 21:20:00 The electricity for 1BTC trade could power a house for a month niyikiza 25
2017-10-19 05:20:26 PAC Fundraising with Ethereum Contracts? westurner 1
2017-10-19 05:16:25 SolarWindow Completes Financing ($2.5m) westurner 2
2017-10-16 12:48:08 Here’s what you can do to protect yourself from the KRACK WiFi vulnerability tdrnd 2
2017-10-14 12:41:29 The Solar Garage Door – A Possible Alternative to the Emergency Generator curtis 2
2017-10-14 07:34:07 Using the Web Audio API to Make a Modem maaaats 307
2017-10-11 18:25:17 Ask HN: How to introduce someone to programming concepts during 12-hour drive? nkkollaw 9
2017-09-27 01:24:13 American Red Cross Asks for Ham Radio Operators for Puerto Rico Relief Effort kw71 346
2017-09-26 14:58:38 Technical and non-technical tips for rocking your coding interview duck 259
2017-09-23 12:12:36 Django 2.0 alpha orf 156
2017-09-24 00:15:28 Ask HN: What is the best way to spend my time as a 17-year-old who can code? jmeyer2k 161
2017-09-21 14:18:33 Democrats fight FCC's plans to redefine “broadband” from 25+ to 10+ Mbps gnicholas 18
2017-09-17 12:49:37 Ask HN: Any detailed explanation of computer science smithmayowa 2
2017-09-16 18:40:33 Ask HN: What algorithms should I research to code a conference scheduling app viertaxa 55
2017-09-15 05:51:45 What have been the greatest intellectual achievements? Gormisdomai 42
2017-09-15 23:22:02 Ask HN: What can't you do in Excel? (2017) danso 37
2017-09-08 20:04:36 Open Source Ruling Confirms Enforceability of Dual-Licensing and Breach of GPL t3f 116
2017-09-01 11:27:30 Elon Musk Describes What Great Communication Looks Like endswapper 90
2017-09-01 04:05:12 Great Ideas in Theoretical Computer Science tu7001 290
2017-08-28 16:06:24 Ask HN: How do you, as a developer, set measurable and actionable goals? humaninstrument 24
2017-08-26 16:06:24 Bitcoin Energy Consumption Index schwabacher 256
2017-08-26 09:59:19 Dancing can reverse the signs of aging in the brain brahmwg 71
2017-08-26 09:03:19 Rumours swell over new kind of gravitational-wave sighting indescions_2017 258
2017-08-20 12:56:37 New Discovery Simplifies Quantum Physics wolfgke 2
2017-08-23 03:22:00 OpenAI has developed new baseline tool for improving deep reinforcement learning grey_shirts 3
2017-08-24 23:19:03 The prior can generally only be understood in the context of the likelihood selimthegrim 94
2017-08-22 04:13:00 Ask HN: How to find/compare trading algorithms with Quantopian? westurner 3
2017-08-22 04:09:17 Ask HN: How do IPOs and ICOs help a business raise capital? westurner 2
2017-08-22 04:02:04 Solar Window coatings “outperform rooftop solar by 50-fold” westurner 4
2017-08-21 23:30:16 MS: Bitcoin mining uses as much electricity as 1M US homes pulisse 79
2017-08-15 15:45:47 Ask HN: What are your favorite entrepreneurship resources brianbreslin 13
2017-05-09 12:59:38 CPU Utilization is Wrong dmit 624
2017-05-06 17:13:03 Ask HN: Can I use convolutional neural networks to clasify videos on a CPU Faizann20 1
2017-05-01 10:17:36 Esoteric programming paradigms SlyShy 397
2017-04-27 04:41:09 gRPC-Web: Moving past REST+JSON towards type-safe Web APIs bestan 329
2017-04-16 03:59:55 Reasons blog posts can be of higher scientific quality than journal articles vixen99 233
2017-04-07 12:50:38 Fact Check now available in Google Search and News fouadmatin 302
2017-04-07 20:07:05 Ask HN: Is anyone working on CRISPR for happiness? arikr 4
2017-03-26 14:58:59 Roadmap to becoming a web developer in 2017 miguelarauj1o 4
2017-03-20 19:14:10 Beautiful Online SICP Dangeranger 762
2017-03-19 11:52:48 Ask HN: How do you keep track/save your learnings?(so that you can revisit them) mezod 4
2017-03-11 13:26:30 Ask HN: Criticisms of Bayesian statistics? muraiki 1
2017-01-16 18:53:09 80,000 Hours career plan worksheet BreakoutList 230
2017-01-07 18:27:31 World's first smartphone with a molecular sensor is coming in 2017 walterbell 19
2016-12-31 12:11:14 Ask HN: How would one build a business that only develops free software? anondon 12
2016-12-29 00:40:11 Ask HN: If your job involves continually importing CSVs, what industry is it? iamwil 12
2016-12-09 17:21:13 Ask HN: Maybe I kind of suck as a programmer – how do I supercharge my work? tastyface 328
2016-11-20 06:33:34 Ask HN: Anything Like Carl Sagan's Cosmos for Computer Science? leksak 32
2016-11-20 10:32:00 Learn X in Y minutes anonu 161
2016-11-03 05:46:50 Org mode 9.0 released Philipp__ 285
2016-11-13 00:23:33 Ask HN: Best Git workflow for small teams tmaly 166
2016-11-10 15:46:57 TDD Doesn't Work narfz 153
2016-11-07 14:13:48 C for Python programmers (2011) bogomipz 314
2016-10-26 02:19:06 Ask HN: How do you organise/integrate all the information in your life? tonteldoos 323
2016-10-23 14:06:00 Ask HN: What are the best web tools to build basic web apps as of October 2016? arikr 114
2016-10-16 10:55:18 Harvard and M.I.T. Are Sued Over Lack of Closed Captions lsh123 45
2016-10-06 11:15:16 Jack Dorsey Is Losing Control of Twitter miraj 283
2016-09-18 09:09:04 Schema.org: Mission, Project, Goal, Objective, Task westurner 49
2016-09-18 08:59:41 This week is #GlobalGoals week (and week of The World's Largest Lesson) westurner 1
2016-08-19 08:12:25 The Open Source Data Science Masters nns 95
2016-07-29 06:08:29 We Should Not Accept Scientific Results That Have Not Been Repeated dnetesn 910
2016-05-30 07:39:05 The SQL filter clause: selective aggregates MarkusWinand 138
2016-05-29 23:36:23 Ask HN: What do you think about the current education system? alejandrohacks 36
2016-05-10 08:55:01 A Reboot of the Legendary Physics Site ArXiv Could Shape Open Science tonybeltramelli 174
2014-03-23 14:27:04 Principles of good data analysis gjreda 108
2014-03-11 08:16:38 Why Puppet, Chef, Ansible aren't good enough iElectric2 362
2014-03-11 20:12:16 Python vs Julia – an example from machine learning ajtulloch 170
2014-02-17 10:23:21 Free static page hosting on Google App Engine in minutes fizerkhan 95
2014-02-03 09:15:30 “Don’t Reinvent the Wheel, Use a Framework” They All Say mogosselin 79
2013-09-09 10:20:50 IPython in Excel vj44 73
2013-08-11 01:56:12 PEP 450: Adding A Statistics Module To The Standard Library petsos 185
2013-08-02 21:03:51 Functional Programming with Python llambda 107
2013-08-01 10:59:55 PEP 8 Modernisation tristaneuan 213
2013-07-15 12:40:04 Useful Unix commands for data science gjreda 221
2013-07-13 11:35:40 The data visualization community needs its own Hacker News ejfox 11
2013-07-06 08:59:22 Ask HN: Intermediate Python learning resources? jesusx 113
2013-07-03 08:00:50 Ansible Simply Kicks Ass hunvreus 185
2013-06-29 05:44:08 Python-Based Tools for the Space Science Community neokya 76
2013-05-04 21:21:29 Debian 7.0 "Wheezy" released sciurus 428
2013-05-04 10:40:20 Big-O Algorithm Complexity Cheat Sheet ashleyblackmore 520
2013-05-03 22:32:14 JSON API steveklabnik 227
2013-05-04 14:04:39 Norton Ghost discontinued ruchirablog 42

Items^


Graph of Keybase commits pre and post Zoom acquisition

0des | 2021-10-09 19:15:34 | 177 | # | ^

FWIU, Cyph does open source E2E chat, file sharing, and unlimited-length social posts to circles or to the public; but it doesn't yet do encrypted git repos, a gap that could be filled with something like git-crypt. https://github.com/cyph/cyph

It would be wasteful to throw away the Web of Trust (mappings from people's handles to keys) that everyone entered into Keybase. Hopefully, Zoom will consider opening up the remaining pieces of Keybase, if not spinning the product back out into a separate entity.

From https://news.ycombinator.com/item?id=19185998 https://westurner.github.io/hnlog/#comment-19185998 :

> There's also "Web Key Directory"; which hosts GPG keys over HTTPS from a .well-known URL for a given user@domain identifier: https://wiki.gnupg.org/WKD

> GPG presumes secure key distribution

> Compared to existing PGP/GPG keyservers [HKP], WKD does rely upon HTTPS.
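
The WKD lookup URL can be computed locally. Here is a minimal sketch of the "direct method" URL construction, per my reading of the GnuPG wiki and draft-koch-openpgp-webkey-service: SHA-1 of the lowercased local-part, z-base-32 encoded, served from the domain's .well-known path. (Lowercasing the `l=` parameter here is a simplification; the draft passes the original local-part.)

```python
import hashlib

# z-base-32 alphabet used by WKD (per the GnuPG wiki)
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    """Encode bytes as z-base-32 (5 bits per output character)."""
    bits = int.from_bytes(data, "big")
    nbits = len(data) * 8
    pad = (-nbits) % 5          # pad to a multiple of 5 bits
    bits <<= pad
    nbits += pad
    return "".join(ZB32[(bits >> shift) & 31]
                   for shift in range(nbits - 5, -1, -5))

def wkd_direct_url(address: str) -> str:
    """Build the WKD 'direct method' URL for a user@domain identifier."""
    local, domain = address.lower().split("@")
    digest = hashlib.sha1(local.encode()).digest()  # 20 bytes -> 32 chars
    return (f"https://{domain}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={local}")
```

A client resolving `joe.doe@example.org` would then fetch the key binary from that URL over HTTPS, which is where WKD's trust model differs from HKP keyservers.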

Blockcerts can be signed when granted to a particular identity:

> Here are the open sources of blockchain-certificates/cert-issuer and blockchain-certificates/cert-verifier-js: https://github.com/blockchain-certificates

CT Certificate Transparency logs for key grants and revocations may depend upon a centralized or a decentralized Merkleized datastore: https://en.wikipedia.org/wiki/Certificate_Transparency
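
The core of a Merkleized log like CT is that a short root hash commits to every logged entry, so an auditor can detect any retroactive change. A minimal sketch, using RFC 6962-style leaf/node domain separation (the odd-node handling below duplicates the last node, which is a simplification; RFC 6962 instead splits the tree at the largest power of two):

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962 domain separation: 0x00 prefix for leaves...
    return sha256(b"\x00" + entry)

def node_hash(left: bytes, right: bytes) -> bytes:
    # ...and 0x01 prefix for interior nodes.
    return sha256(b"\x01" + left + right)

def merkle_root(entries) -> bytes:
    """Root hash committing to every entry in the log."""
    nodes = [leaf_hash(e) for e in entries]
    while len(nodes) > 1:
        if len(nodes) % 2:           # simplification: duplicate the
            nodes.append(nodes[-1])  # last node when the level is odd
        nodes = [node_hash(nodes[i], nodes[i + 1])
                 for i in range(0, len(nodes), 2)]
    return nodes[0]
```

Changing any logged entry (e.g. a key grant or revocation record) changes the root, which is what log monitors compare; the same construction works whether the datastore behind it is centralized or decentralized.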

How do I specify the correct attributes of my schema.org/Person record (maybe on my JAMstack site) in order to approximate the list of cryptographically-proven identities that e.g. Keybase lets one register?

Do I generate a W3C DID and claim my identities by listing them in a JSON-LD document signed with W3C ld-proofs (ld-signatures)? Which of the key directory and Web of Trust features of Keybase are covered by existing W3C spec Use Cases?
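
As a hedged sketch of what such a self-hosted, Keybase-style identity-claims document might look like - the field names are illustrative, not verbatim from DID Core or any ld-proofs suite, and HMAC stands in for the asymmetric signature and RDF canonicalization (URDNA2015) that real ld-proofs use:

```python
import hashlib
import hmac
import json

# Hypothetical identity-claims document, loosely modeled on a DID
# document plus schema.org `sameAs` links; all values are examples.
claims = {
    "@context": ["https://www.w3.org/ns/did/v1", "https://schema.org"],
    "id": "did:example:123456789abcdefghi",
    "sameAs": [
        "https://github.com/example-user",
        "https://twitter.com/example_user",
    ],
}

def sign_claims(doc: dict, key: bytes) -> dict:
    # Canonicalize via sorted-key JSON; real ld-proofs use RDF dataset
    # canonicalization and an asymmetric signature suite, not HMAC.
    payload = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    mac = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {**doc, "proof": {"type": "HmacSha256StandIn", "mac": mac}}

def verify_claims(signed: dict, key: bytes) -> bool:
    doc = {k: v for k, v in signed.items() if k != "proof"}
    payload = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["proof"]["mac"])
```

The missing Keybase piece is still the other direction: posting something at each `sameAs` target that points back at the key, so a verifier can check both halves of the binding.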

From https://news.ycombinator.com/item?id=28701355:

> "Use Cases and Requirements for Decentralized Identifiers" https://www.w3.org/TR/did-use-cases/

>> 2. Use Cases: Online shopper, Vehicle assemblies, Confidential Customer Engagement, Accessing Master Data of Entities, Transferable Skills Credentials, Cross-platform User-driven Sharing, Pseudonymous Work, Pseudonymity within a supply chain, Digital Permanent Resident Card, Importing retro toys, Public authority identity credentials (eIDAS), Correlation-controlled Services

> And then, IIUC W3C Verifiable Credentials / ld-proofs can be signed with W3C DID keys - that can also be generated or registered centrally, like hosted wallets or custody services. There are many Use Cases for Verifiable Credentials: https://www.w3.org/TR/vc-use-cases/ :

>> 3. User Needs: Education, Retail, Finance, Healthcare, Professional Credentials, Legal Identity, Devices

>> 4. User Tasks: Issue Claim, Assert Claim, Verify Claim, Store / Move Claim, Retrieve Claim, Revoke Claim

>> 5. Focal Use Cases: Citizenship by Parentage, Expert Dive Instructor, International Travel with Minor and Upgrade

>> 6. User Sequences: How a Verifiable Credential Might Be Created, How a Verifiable Credential Might Be Used

Is there an ACME-like protocol for verifying control of online identities, like Keybase still does?



Startup Ideas

luu | 2021-10-05 07:30:30 | 222 | # | ^

IIUC, in 2021, you can dock a PineTab or a PinePhone with a USB-C PD hub that has HDMI, USB, and Ethernet, and use any of a number of Linux desktop operating systems on a larger screen with a full-size keyboard and mouse.

The PineTab has a backlit keyboard and IIUC the PinePhone has a keyboard & aux battery case that doesn't yet also include the fingerprint sensor or wireless charging. https://www.pine64.org/blog/


It is easier to educate a Do-er than to motivate the educated

tosh | 2021-10-05 13:15:42 | 446 | # | ^

~ "Imagine that one could give you a copy of all of their knowledge. If you do not choose to apply and learn on your own, you can never."

This is about regimen, this is about stamina, this is about sticktoitiveness; and if you don't want it, you don't need it, you'll never. And I mean never.

The Grit article on Wikipedia mentions persistence, tenacity, and stick-to-it-tiveness as roughly synonymous; and notes that grit may not be that distinct from other Big Five personality traits, but we're not about to listen to that, we're not going with that, because Grit is a predictor of success. https://en.wikipedia.org/wiki/Grit_(personality_trait)

To the original point,

> In psychology, grit is a positive, non-cognitive trait based on an individual's perseverance of effort combined with the passion for a particular long-term goal or end state (a powerful motivation to achieve an objective). This perseverance of effort promotes the overcoming of obstacles or challenges that lie on the path to accomplishment and serves as a driving force in achievement realization. Distinct but commonly associated concepts within the field of psychology include "perseverance", "hardiness", "resilience", "ambition", "need for achievement" and "conscientiousness". These constructs can be conceptualized as individual differences related to the accomplishment of work rather than talent or ability.


Are software engineering “best practices” just developer preferences?


Critical systems: https://en.wikipedia.org/wiki/Critical_system :

> There are four types of critical systems: safety critical, mission critical, business critical and security critical.

Safety-critical systems > "Software engineering for safety-critical systems" https://en.wikipedia.org/wiki/Safety-critical_system#Softwar... :

> By setting a standard for which a system is required to be developed under, it forces the designers to stick to the requirements. The avionics industry has succeeded in producing standard methods for producing life-critical avionics software. Similar standards exist for industry, in general, (IEC 61508) and automotive (ISO 26262), medical (IEC 62304) and nuclear (IEC 61513) industries specifically. The standard approach is to carefully code, inspect, document, test, verify and analyze the system. Another approach is to certify a production system, a compiler, and then generate the system's code from specifications. Another approach uses formal methods to generate proofs that the code meets requirements.[11] All of these approaches improve the software quality in safety-critical systems by testing or eliminating manual steps in the development process, because people make mistakes, and these mistakes are the most common cause of potential life-threatening errors.

awesome-safety-critical lists very many resources for safety critical systems: https://awesome-safety-critical.readthedocs.io/en/latest/

There are many ['Engineering'] certification programs for software and other STEM fields. A single test to qualify applicants does not constitute a sufficient set of controls for safety-critical systems, which must be resilient, fault-tolerant, and redundant.

A real Engineer knows from reviewing even limited documentation when there are insufficient process controls; that's process wisdom from experience. An engineer starts with this premise: "There are insufficient controls to do this safely," because some [test scenario parameter set n] would result in the system state - the output of what is probably a complex nonlinear dynamic system - falling outside of acceptable parameters for safe operation.
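
That premise can be mechanized in miniature: exhaustively check a candidate implementation against its stated requirement over a bounded test-scenario parameter set, a toy stand-in for bounded model checking (the function and requirement here are illustrative, not from any standard):

```python
def clamp(x: int, lo: int, hi: int) -> int:
    """Candidate implementation under review."""
    return min(max(x, lo), hi)

def spec_holds(x: int, lo: int, hi: int) -> bool:
    """Requirement: the output stays within [lo, hi], and equals x
    whenever x is already in range."""
    y = clamp(x, lo, hi)
    in_range = lo <= y <= hi
    identity = (y == x) if lo <= x <= hi else True
    return in_range and identity

# Exhaustively enumerate a bounded test-scenario parameter set.
domain = range(-20, 21)
violations = [(x, lo, hi)
              for lo in domain for hi in domain if lo <= hi
              for x in domain
              if not spec_holds(x, lo, hi)]
assert not violations  # no enumerated scenario drives the output out of range
```

Exhaustive enumeration only "proves" the requirement over the enumerated domain; the formal-methods tools linked below generalize the same idea to all inputs.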

Are there [formal] Engineering methods that should be requisite for "Computer Science" degrees? What about an "Applied Secure Coding Practices in [Language]" course? Is that sufficient to teach theory and formal methods?

From "How We Proved the Eth2 Deposit Contract Is Free of Runtime Errors" https://news.ycombinator.com/item?id=28513922 :

>> From "Discover and Prevent Linux Kernel Zero-Day Exploit Using Formal Verification" https://news.ycombinator.com/item?id=27442273 :

>> [Coq, VST, CompCert]

>> Formal methods: https://en.wikipedia.org/wiki/Formal_methods

>> Formal specification: https://en.wikipedia.org/wiki/Formal_specification

>> Implementation of formal specification: https://en.wikipedia.org/wiki/Anti-pattern#Software_engineer...

>> Formal verification: https://en.wikipedia.org/wiki/Formal_verification

>> From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :

>>> Which universities teach formal methods?

>>> - q=formal+verification https://www.class-central.com/search?q=formal+verification

>>> - q=formal+methods https://www.class-central.com/search?q=formal+methods

>>> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs? https://news.ycombinator.com/item?id=28513922

From "Ask HN: Is it worth it to learn C in 2020?" https://news.ycombinator.com/item?id=21878372 :

> There are a number of coding guidelines e.g. for safety-critical systems where bounded running time and resource consumption are essential. These coding guidelines and standards are basically only available for C, C++, and Ada.

awesome-safety-critical > Software safety standards: https://awesome-safety-critical.readthedocs.io/en/latest/#so...

awesome-safety-critical > Coding Guidelines: https://awesome-safety-critical.readthedocs.io/en/latest/#co...


Major Quantum Computing Strategy Suffers Serious Setbacks


"Quantized Majorana conductance not actually observed within indium antimonide nanowires"

"Quantum qubit substrate found to be apparently insufficient" (given the methods and the resources that were probably available)

And then - in an attempt to use terminology from Constructor Theory https://en.m.wikipedia.org/wiki/Constructor_theory :

> In constructor theory, a transformation or change is described as a task. A constructor is a physical entity which is able to carry out a given task repeatedly. A task is only possible if a constructor capable of carrying it out exists, otherwise it is impossible. To work with constructor theory everything is expressed in terms of tasks. The properties of information are then expressed as relationships between possible- and impossible tasks. Counterfactuals are thus fundamental statements and the properties of information may be described by physical laws.[4] If a system has a set of attributes, the set of permutations of these attributes is seen as a set of tasks. A computation medium is a system whose attributes permute to always produce a possible task. The set of permutations, and hence of tasks, is a computation set. If it is possible to copy the attributes in the computation set, the computation medium is also an information medium.

> Information, or a given task, does not rely on a specific constructor. Any suitable constructor will serve. This ability of information to be carried on different physical systems or media is described as interoperability, and arises as the principle that the combination of two information media is also an information medium.[4] Media capable of carrying out quantum computations are called superinformation media, and are characterised by specific properties. Broadly, certain copying tasks on their states are impossible tasks. This is claimed to give rise to all the known differences between quantum and classical information.[4]

"Subsequent attempts to reproduce [Quantized Majorana conductance (topological qubits of arranged electrons) within indium antimonide nanowires] eventually as a (quantum) computation medium for the given tasks failed"

"Quantum computation by Majorana zero-mode (MZM) quasiparticles in indium antimonide nanowires not actually apparently possible"

... "But what about in DDR5?" Which leads to something more generally interesting: "Rowhammer for qubits", which is already an actual Quantum on Silicon (QoS) thing.


Attempts to scientifically “rationalize” policy may be damaging democracy

First, not having read the article:

#EvidenceBasedPolicy is a worthwhile objective even if only because the alternative is to just blow money without measuring ROI at all [because government expenditures are the actual key to feeding the beast, the economic beast, the...].

What are some examples of policy failures where systematic review and meta-analysis could have averted loss, harms, waste, catastrophe, or long-term costs? Is that cherry-picking? The other times, we can just throw a dart, and that's better than, ahem, these idiots we afford trying to do science?

Wouldn't it be fair to require that constituent ScholarlyArticles (and other CreativeWorks) be kept on file with e.g. the Library of Congress?

Non-federal governments usually have very similar IT and science policy review needs. Should adapting one system for non-federal governments be more complex than specifying a different String or URL in the token_name field in a transaction?

When experts review ScholarlyArticles on our behalf, they should share their structured and unstructured annotations - cryptographically signed reviews, plus highlights that identify and extract structured facts such as summary statistics, sample sizes, and IRB-reviewed study controls. These should become part of a team-focused, collaborative, systematic meta-analysis that is kept on file and regularly reviewed in regards to e.g. retractions, typical cognitive biases, failures in experimental design and implementation, and general insufficiencies that should cause us to re-evaluate our beliefs given all available information which meets our established inclusion criteria.

We have a process for peer review of PDFs - and hopefully datasets with locality for reproducibility and unitarity which purportedly helps us work through something like this sequence:

Data / Information / Knowledge / Experience / Wisdom

We often have gaps in our processes to support such progress in developing wisdom from knowledge - which should be predicated upon sound information and data - and then, with experience, bias creeps in.

Basic principles restricting the powers of the government should prevent the government - us, we - from specifically violating the protected rights of persons; but we have allowed "Science" to cloud our judgement in application of our most basic principles of justice - i.e. Life, Liberty, and the pursuit of Happiness; and Equality and Equitability - and should we chalk the unintended consequences up to ignorance or malice?

More science all around: more Data Literacy - awareness of how many bad statistical claims are made every day, all around the world - is good and necessary and essential to Media Literacy, which is how we would be forming our opinions if we didn't have better tools for truth and belief in science.

"What does it mean to know?" etc.

Logic, Inference, Reasoning and Statistics probably predicated upon classical statistical mechanics are supposed to bring us closer to knowing: to bring our beliefs closer to the most widely observed truths.

Which Verifiable Claims do we trust? What studies do we admit into our personal and community meta-analyses according to our shared inclusion criteria?

"Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)" is one standard for meta-analyses, for example. http://www.prisma-statement.org/ . Could the bad guys or the dumb good guys lie with that control in place, too? Can knowing our rights - and upholding oaths to uphold values - protect us from meta-analytical group failure?

Perhaps STEAM (Science, Technology, Engineering, Art, and Medicine/Math) majors and other interested parties can help develop solutions for #EvidenceBasedPolicy?

This one fell flat. Maybe it was the time of day? The question should be asked every year, at least, eh? "Ask HN: Systems for supporting Evidence-Based Policy?" https://news.ycombinator.com/item?id=22920613

>> What tools and services would you recommend for evidence-based policy tasks like meta-analysis, solution criteria development, and planned evaluations according to the given criteria?

>> Are they open source? Do they work with linked open data?

> I suppose I should clarify that citizens, consumers, voters, and journalists are not acceptable answers

"#LinkedMetaAnalyses", "#StructuredPremises"; Ctrl-F "linkedmeta", "linkedrep", "#LinkedResearch": https://westurner.github.io/hnlog/

Alright, my fair biases disclosed, on to reading the actual article: /1


Response to 'Call for Review: Decentralized Identifiers (DIDs) v1.0'


> Somebody introduces a new technology to address these concerns every couple years and it doesn't go anywhere. These aren't actually problems to a lot of users.

"Use Cases and Requirements for Decentralized Identifiers" https://www.w3.org/TR/did-use-cases/

> 2. Use Cases: Online shopper, Vehicle assemblies, Confidential Customer Engagement, Accessing Master Data of Entities, Transferable Skills Credentials, Cross-platform User-driven Sharing, Pseudonymous Work, Pseudonymity within a supply chain, Digital Permanent Resident Card, Importing retro toys, Public authority identity credentials (eIDAS), Correlation-controlled Services

And then, IIUC W3C Verifiable Credentials / ld-proofs can be signed with W3C DID keys - that can also be generated or registered centrally, like hosted wallets or custody services. There are many Use Cases for Verifiable Credentials: https://www.w3.org/TR/vc-use-cases/ :

> 3. User Needs: Education, Retail, Finance, Healthcare, Professional Credentials, Legal Identity, Devices

> 4. User Tasks: Issue Claim, Assert Claim, Verify Claim, Store / Move Claim, Retrieve Claim, Revoke Claim

> 5. Focal Use Cases: Citizenship by Parentage, Expert Dive Instructor, International Travel with Minor and Upgrade

> 6. User Sequences: How a Verifiable Credential Might Be Created, How a Verifiable Credential Might Be Used

IIRC DHS funded some of the W3C DID and Verified Credentials specification efforts. See also: https://news.ycombinator.com/item?id=26758099

There's probably already a good way to bridge between sub-SKU GS1 schema.org/identifier values on barcodes and QR codes and DIDs. For GS1, you must register a ~namespace prefix, and then IIUC you can use the rest of the available address space within the barcode or QR code.

DIDs can replace ORCIDs - of which you can also just generate a new one - for academics seeking to group their ScholarlyArticles by a better identifier than a transient university email address.

The new UUID formats may or may not be optionally useful in conjunction with W3C DID, VC, and Verifiable News, etc. https://news.ycombinator.com/item?id=28088213

When would a DID be a better choice than a UUID?


Apple didn't revolutionize power supplies; new transistors did (2012)


All brands should put USB-A and USB-C ports on the power brick.


What does my engineering manager do all day?


> - many meetings can be replaced by an update email

Highlights from the feed(s); GitLab has the better activity view IMHO but I haven't tried the new GitHub Issues beta yet.

3 questions from 5-minute Stand-Up Meetings (because everyone's actually standing there trying to leave), adapted for Digital Stand-Up Meetings: Since, Before, Obstacles:

  ## 2021-09-28
  ### @teammembername
  #### Since
  #### Before
  #### Obstacles

Since: What have you done since last reporting back? Before: What do you plan to do before our next meeting? Obstacles: What obstacles need which other team resources in order to be solved?

You can do cool video backgrounds for any video conferencing app with pipewire.

You can ask team members to prep a .txt with their 3 questions and drop it in the chat, such that the team can reply to individual #fragments of your brief status report / continued-employment justification argument.

> - decisions often work better through docs + feedback than big meetings

So, ah, asynchronous communication doesn't require transcription for the "Leader Assistant" that does the mandatory quarterly minutes from the team chat logs, at least.

6 Patterns of Collaboration: GRCOEB: Generate, Reduce, Clarify, Organize, Evaluate, Build Consensus [Six Patterns]; voting on specific Issues, and ideally Chat - [x] lineitems, and https://schema.org/SocialMediaPosting with emoji reactions

[Six Patterns]: http://wrdrd.github.io/docs/consulting/team-building#six-pat... , Text Templates, Collaboration Checklist: Weighted Criteria, Ranked-choice Voting.

Docs and posts with URLs and in-text pull-quotes do better than another list of citations at the end.

> - you don't need frequent contact with the team if the goals and constraints are communicated very clearly

Metrics: OKRs, KPIs, #GlobalGoals Goals Targets and Indicators

Tools / Methods; Data / Information / Knowledge / Experience / Wisdom:

- Issues: Title, - [ ] Description, Labels, Assignee, - [ ] Comments, Emoji Reactions;

- Pull Requests, - [ ] [Optional] [Formal] Reviews, Labels & "Codelabels", label:SkipPreflight, CI Build Logs, and Signed Deployed Documented Applications; code talks, the tests win again, docs sell

- Find and Choose - with Consensus - a sufficiently mature Component that already testably does: unified Email notifications (with inbound replies), notifications on each and every Chat API, and finally the web standard thing, thanks: W3C Web Notifications.

- Contribute Tests for [open source] Components.

- [ ] Create a workflow document with URLs and Text Templates

- [ ] Create a daily running document with my 3 questions and headings and indented markdown checkbox lists; possibly also with todotxt/todo.txt / TaskWarrior & BugWarrior -style lineitem markup.

What does an engineering manager do all day?

A polite answer would be: continuously reevaluate the tests of the product - and probably also the business model, if anyone knew what they were up to in there.


Using two keyboards at once for pain relief


The MS Natural split keyboards are easy to find, but they don't have satisfyingly clicky mechanical keys like in the olden times.

How long do these last?

Edit: "Ergonomic keyboard" https://en.wikipedia.org/wiki/Ergonomic_keyboard > #Split_keyboard:

> Split keyboards group keys into two or more sections. Ergonomic split keyboards can be fixed, where you cannot change the positions of the sections, or adjustable. Split keyboards typically change the angle of each section, and the distance between them. On an adjustable split keyboard, this can be tailored exactly to the user. People with a broad chest will benefit from an adjustable split keyboard's ability to customize the distance between the two halves of the board. This ensures the elbows are not too close together when typing. [2]


Waydroid – Run Android containers on Ubuntu


> binfmt_misc

https://en.wikipedia.org/wiki/Binfmt_misc

> binfmt_misc can also be combined with QEMU to execute programs for other processor architectures as if they were native binaries.[9]

QEMU supported [ARM guest] machines: https://wiki.qemu.org/Documentation/Platforms/ARM#Supported_...

Edit: from "Running and Building ARM Docker Containers on x86" (which also describes how to get CUDA working) https://www.stereolabs.com/docs/docker/building-arm-containe... :

  sudo apt-get install qemu binfmt-support qemu-user-static # Install the qemu packages
  docker run --rm --privileged multiarch/qemu-user-static --reset -p yes # Execute the registering scripts

  docker run --rm -t arm64v8/ubuntu uname -m # Test the emulation environment

https://github.com/multiarch/qemu-user-static :

> multiarch/qemu-user-static is to enable an execution of different multi-architecture containers by QEMU [1] and binfmt_misc [2]. Here are examples with Docker [3].

Why the heck isn't there just an official Android container and/or a LineageOS container?

It's not a certified device, so.

There are a number of ways to build "multi-arch docker images" e.g. for both x86 and ARM: OCI, docker build, podman build, buildx, buildah.

Containers are testable.

Here's this re: whether the official OpenWRT container should run /sbin/init in order to run procd, ubusd, etc.: https://github.com/docker-library/official-images/pull/7975#...

AFAIU from a termux issue thread re: repackaging everything individually, the latest Android requires binaries to be installed from APKs to get the SELinux context label necessary to run?


Biologists Rethink the Logic Behind Cells’ Molecular Signals


Most cells or matter in the body?

From https://www.nature.com/articles/nature.2016.19136 :

> A 'reference man' (one who is 70 kilograms, 20–30 years old and 1.7 metres tall) contains on average about 30 trillion human cells and 39 trillion bacteria, […] Those numbers are approximate — another person might have half as many or twice as many bacteria, for example — but far from the 10:1 ratio commonly assumed.

Symbiosis https://en.wikipedia.org/wiki/Symbiosis :

> Symbiosis […] is any type of a close and long-term biological interaction between two different biological organisms, be it mutualistic, commensalistic, or parasitic. […]

> Symbiosis can be obligatory, which means that one or more of the symbionts depend on each other for survival, or facultative (optional), when they can generally live independently. […]

> Symbiosis is also classified by physical attachment. When symbionts form a single body it is called conjunctive symbiosis, while all other arrangements are called disjunctive symbiosis.[3] When one organism lives on the surface of another, such as head lice on humans, it is called ectosymbiosis; when one partner lives inside the tissues of another, such as Symbiodinium within coral, it is termed endosymbiosis.

Endosymbiont: https://en.wikipedia.org/wiki/Endosymbiont :

> Two major types of organelle in eukaryotic cells, mitochondria and plastids such as chloroplasts, are considered to be bacterial endosymbionts.[6] This process is commonly referred to as symbiogenesis.

Symbiogenesis: https://en.wikipedia.org/wiki/Symbiogenesis #Secondary_endosymbiosis ... Viral eukaryogenesis: https://en.wikipedia.org/wiki/Viral_eukaryogenesis :

> A number of precepts in the theory are possible. For instance, a helical virus with a bilipid envelope bears a distinct resemblance to a highly simplified cellular nucleus (i.e., a DNA chromosome encapsulated within a lipid membrane). In theory, a large DNA virus could take control of a bacterial or archaeal cell. Instead of replicating and destroying the host cell, it would remain within the cell, thus overcoming the tradeoff dilemma typically faced by viruses. With the virus in control of the host cell's molecular machinery, it would effectively become a functional nucleus. Through the processes of mitosis and cytokinesis, the virus would thus recruit the entire cell as a symbiont—a new way to survive and proliferate.

T-Cell # Activation: https://en.wikipedia.org/wiki/T_cell#Activation

> Both are required for production of an effective immune response; in the absence of co-stimulation, T cell receptor signalling alone results in anergy. […]

> Once a T cell has been appropriately activated (i.e. has received signal one and signal two) it alters its cell surface expression of a variety of proteins.

T-cell receptor § Signaling pathway: https://en.wikipedia.org/wiki/T-cell_receptor#Signaling_path...

Co-stimulation : https://en.wikipedia.org/wiki/Co-stimulation :

> Co-stimulation is a secondary signal which immune cells rely on to activate an immune response in the presence of an antigen-presenting cell.[1] In the case of T cells, two stimuli are required to fully activate their immune response. During the activation of lymphocytes, co-stimulation is often crucial to the development of an effective immune response. Co-stimulation is required in addition to the antigen-specific signal from their antigen receptors.

Anergy: https://en.wikipedia.org/wiki/Clonal_anergy :

> [Clonal] Anergy is a term in immunobiology that describes a lack of reaction by the body's defense mechanisms to foreign substances, and consists of a direct induction of peripheral lymphocyte tolerance. An individual in a state of anergy often indicates that the immune system is unable to mount a normal immune response against a specific antigen, usually a self-antigen. Lymphocytes are said to be anergic when they fail to respond to their specific antigen. Anergy is one of three processes that induce tolerance, modifying the immune system to prevent self-destruction (the others being clonal deletion and immunoregulation ).[1]

Clonal deletion: https://en.wikipedia.org/wiki/Clonal_deletion :

> There are millions of B and T cells inside the body, both created within the bone marrow and the latter matures in the thymus, hence the T. Each of these lymphocytes express specificity to a particular epitope, or the part of an antigen to which B cell and T cell receptors recognize and bind. There is a large diversity of epitopes recognized and, as a result, it is possible for some B and T lymphocytes to develop with the ability to recognize self.[4] B and T cells are presented with self antigen after developing receptors while they are still in the primary lymphoid organs.[3][4] Those cells that demonstrate a high affinity for this self antigen are often subsequently deleted so they cannot create progeny, which helps protect the host against autoimmunity.[2][3] Thus, the host develops a tolerance for this antigen, or a self tolerance.[3]

"DNA threads released by activated CD4+ T lymphocytes provide autocrine costimulation" (2019) https://www.pnas.org/content/116/18/8985

> A growing body of literature has shown that, aside from carrying genetic information, both nuclear and mitochondrial DNA can be released by innate immune cells and promote inflammatory responses. Here we show that when CD4+ T lymphocytes, key orchestrators of adaptive immunity, are activated, they form a complex extracellular architecture composed of oxidized threads of DNA that provide autocrine costimulatory signals to T cells. We named these DNA extrusions “T helper-released extracellular DNA” (THREDs).

FWIU, there's also a gut-brain pathway? Or is that also this "signaling method" for feedback in symbiotic complex dynamic systems?

From https://en.wikipedia.org/wiki/Complex_system :

> Complex systems are systems whose behavior is intrinsically difficult to model due to the dependencies, competitions, relationships, or other types of interactions between their parts or between a given system and its environment. Systems that are "complex" have distinct properties that arise from these relationships, such as nonlinearity, emergence, spontaneous order, adaptation, and feedback loops, among others. Because such systems appear in a wide variety of fields, the commonalities among them have become the topic of their independent area of research. In many cases, it is useful to represent such a system as a network where the nodes represent the components and links to their interactions.

Graph, Hypergraph, Property graph, Linked Data, AtomSpace, RDF* + SPARQL*, ONNX, {...}

> The term complex systems often refers to the study of complex systems, which is an approach to science that investigates how relationships between a system's parts give rise to its collective behaviors and how the system interacts and forms relationships with its environment.[1] The study of complex systems regards collective, or system-wide, behaviors as the fundamental object of study; for this reason, complex systems can be understood as an alternative paradigm to reductionism, which attempts to explain systems in terms of their constituent parts and the individual interactions between them.

A multi-digraph of probably nonlinear relations may not be the best way to describe the fields of even just a few electroweak magnets?

> As an interdisciplinary domain, complex systems draws contributions from many different fields, such as the study of self-organization and critical phenomena from physics, that of spontaneous order from the social sciences, chaos from mathematics, adaptation from biology, and many others. Complex systems is therefore often used as a broad term encompassing a research approach to problems in many diverse disciplines, including statistical physics, information theory, nonlinear dynamics, anthropology, computer science, meteorology, sociology, economics, psychology, and biology.

... Glossary of Systems Theory: https://en.wikipedia.org/wiki/Glossary_of_systems_theory

The Shunting-yard algorithm converts infix notation to RPN

RosettaCode has examples of the Shunting-yard algorithm for parsing infix notation like ((1+2)*3)^4 to an AST or just a stack of data and operators, such as RPN:

Parsing/Shunting-yard algorithm: https://rosettacode.org/wiki/Parsing/Shunting-yard_algorithm

Parsing/RPN to infix conversion: https://rosettacode.org/wiki/Parsing/RPN_to_infix_conversion...
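To make the conversion concrete, here is a minimal Python sketch of shunting-yard for the binary operators `+ - * / ^` with parentheses (a right-associative `^`; no functions or unary minus), plus an RPN evaluator:

```python
def shunting_yard(tokens):
    """Convert infix tokens to RPN (postfix) via Dijkstra's shunting-yard."""
    prec = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
    right_assoc = {'^'}
    out, ops = [], []
    for tok in tokens:
        if tok not in prec and tok not in '()':
            out.append(tok)                      # operand -> output
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':
                out.append(ops.pop())
            ops.pop()                            # discard the '('
        else:                                    # operator
            while (ops and ops[-1] != '(' and
                   (prec[ops[-1]] > prec[tok] or
                    (prec[ops[-1]] == prec[tok] and tok not in right_assoc))):
                out.append(ops.pop())
            ops.append(tok)
    while ops:
        out.append(ops.pop())
    return out

def eval_rpn(rpn):
    """Evaluate an RPN token list with a value stack."""
    stack = []
    for tok in rpn:
        if tok in ('+', '-', '*', '/', '^'):
            b, a = stack.pop(), stack.pop()
            stack.append({'+': a + b, '-': a - b, '*': a * b,
                          '/': a / b, '^': a ** b}[tok])
        else:
            stack.append(float(tok))
    return stack[0]

tokens = ['(', '(', '1', '+', '2', ')', '*', '3', ')', '^', '4']
rpn = shunting_yard(tokens)
print(rpn)            # ['1', '2', '+', '3', '*', '4', '^']
print(eval_rpn(rpn))  # 6561.0
```

For the ((1+2)*3)^4 example, the RPN is `1 2 + 3 * 4 ^`, which evaluates to 9^4 = 6561.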

Applications: testing all combinations of operators, with and without term grouping (parentheses); for example, evolutionary algorithms or universal function approximators that explore the space.

For example: https://github.com/westurner/notebooks/blob/gh-pages/maths/b... :

> This still isn't the complete set of possible solutions


How should logarithms be taught?

As one shape of a curve; in a notebook that demonstrates multiple methods of curve fitting with and without a logarithmic transform.

Logarithm: https://simple.wikipedia.org/wiki/Logarithm ; https://en.wikipedia.org/wiki/Logarithm :

> In mathematics, the logarithm is the inverse function to exponentiation. That means the logarithm of a given number x is the exponent to which another fixed number, the base b, must be raised, to produce that number x.
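A quick check of that definition with the Python stdlib `math` module (exact float equality is only guaranteed in special cases like `log2` on powers of two, so `isclose` is used elsewhere):

```python
import math

# log_b(x) is the exponent to which b must be raised to produce x
assert math.log2(8) == 3.0                       # because 2**3 == 8
assert math.isclose(2 ** math.log(8, 2), 8.0)    # b**log_b(x) == x
# change of base: log_b(x) = ln(x) / ln(b)
assert math.isclose(math.log(100, 10), math.log(100) / math.log(10))
```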

List of logarithmic identities: https://en.wikipedia.org/wiki/List_of_logarithmic_identities

List of integrals of logarithmic functions: https://en.wikipedia.org/wiki/List_of_integrals_of_logarithm...

As functions in a math library or a CAS that should implement the correct axioms correctly:

Sympy Docs > Functions > Contents: https://docs.sympy.org/latest/modules/functions/index.html#c...

sympy.functions.elementary.exponential. log(x, base=e) == log(x)/log(e), exp(), LambertW(), exp_polar() https://docs.sympy.org/latest/modules/functions/elementary.h...

"Exponential, Logarithmic and Trigonometric Integrals" sympy.functions.special.error_functions. Ei: exponential integral, li: logarithmic integral, Li: offset logarithmic integral https://docs.sympy.org/latest/modules/functions/special.html...

numpy.log. log() base e, log2(), log10(), log1p(x) == log(1 + x) https://numpy.org/doc/stable/reference/generated/numpy.log.h...

numpy.exp. exp(), expm1(x) == exp(x) - 1, exp2(x) == 2**x https://numpy.org/doc/stable/reference/generated/numpy.exp.h...
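A short numpy sketch verifying the identities listed above (assumes numpy is installed; `allclose` absorbs last-bit float noise):

```python
import numpy as np

x = np.array([1e-12, 0.5, 1.0, 2.0])
# log1p(x) == log(1 + x), but log1p stays accurate for tiny x
assert np.allclose(np.log1p(x), np.log(1 + x))
# expm1(x) == exp(x) - 1, accurate near zero
assert np.allclose(np.expm1(x), np.exp(x) - 1)
# exp2(x) == 2**x
assert np.allclose(np.exp2(x), 2.0 ** x)
# change of base: log2(x) == log(x) / log(2)
assert np.allclose(np.log2(x), np.log(x) / np.log(2))
```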

Khan Academy > Algebra 2 > Unit: Logarithms: https://www.khanacademy.org/math/algebra2/x2ec2f6f830c9fb89:...

Khan Academy > Algebra (all content) > Unit: Exponential & logarithmic functions https://www.khanacademy.org/math/algebra-home/alg-exp-and-lo...

3blue1brown: "Logarithm Fundamentals | Lockdown math ep. 6", "What makes the natural log "natural"? | Lockdown math ep. 7" https://www.youtube.com/playlist?list=PLZHQObOWTQDP5CVelJJ1b...

Feynman Lectures 22-6: Algebra > Imaginary Exponents: https://www.feynmanlectures.caltech.edu/I_22.html#Ch22-S6

Power law functions: https://en.wikipedia.org/wiki/Power_law#Power-law_functions

In a two-body problem, of the 4-5 fundamental interactions: Gravity, Electroweak interaction, Strong interaction, Higgs interaction, a fifth force; which have constant exponential terms in their symbolic field descriptions? https://en.wikipedia.org/wiki/Fundamental_interaction#The_in...

Natural logs in natural systems:

Growth curve (biology) > Exponential growth: https://en.wikipedia.org/wiki/Growth_curve_(biology)#Exponen...

Basic reproduction number: https://en.wikipedia.org/wiki/Basic_reproduction_number

(... Growth hacking; awesome-growth-hacking: https://github.com/bekatom/awesome-growth-hacking )

Metcalfe's law: https://en.wikipedia.org/wiki/Metcalfe%27s_law

Moore's law; doubling time: https://en.wikipedia.org/wiki/Moore's_law

A block reward halving is a doubling of difficulty. What block reward difficulty schedule would be a sufficient inverse of Moore's law?
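The doubling-time arithmetic above reduces to one logarithmic formula, t_double = ln 2 / ln(1 + r); a sketch (the 41%/year rate is a hypothetical figure chosen to give roughly a two-year doubling, not a sourced number):

```python
import math

def doubling_time(rate):
    """Periods needed to double at compound growth `rate` per period."""
    return math.log(2) / math.log(1 + rate)

# ~41%/yr compound growth doubles in about 2 years (Moore's-law-like cadence)
assert math.isclose(doubling_time(0.41), 2.0, rel_tol=0.02)
# the "rule of 70": at 1%/period, doubling takes ~70 periods
assert math.isclose(doubling_time(0.01), 69.66, rel_tol=0.001)
```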

A few queries:

logarithm cheatsheet https://www.google.com/search?q=logarithm+cheatsheet

logarithm on pinterest https://www.pinterest.com/search/pins/?q=logarithm

logarithm common core worksheet https://www.google.com/search?q=logarithm+common+core+worksh...

logarithm common core autograded exercise (... Khan Academy randomizes from a parametrized (?) test bank for unlimited retakes for Mastery Learning) https://www.google.com/search?q=logarithm+common+core+autogr...

If only I had started my math career with a binder of notebooks or at least 3-hole-punched notes.

- [ ] Create a git repo with an environment.yml that contains e.g. `mamba install -y jupyter-book jupytext jupyter_contrib_extensions jupyterlab-git nbdime jupyter_console pandas matplotlib sympy altair requests-html`, build a container from said repo with repo2docker, and git commit and push changes made from within the JupyterLab instance that repo2docker layers on top of your reproducible software dependency requirement specification ("REES"). {bash/zsh, git, docker, repo2docker, jupyter, [MyST] markdown and $$ mathTeX $$; Google Colab, Kaggle Kernels, ml-workspace, JupyterLite}

"How I'm able to take notes in mathematics lectures using LaTeX and Vim" https://news.ycombinator.com/item?id=19448678

Here's something like MyST Markdown or Rmarkdown for Jupyter-Book and/or jupytext:

## Log functions

Log functions in the {PyData} community

### LaTeX

#### sympy2latex

What e.g. sympy2latex parses that LaTeX into, in terms of symbolic objects in an expression tree:

### numpy

see above

### scipy

### sympy

see above

### sagemath

### statsmodels

### TensorFlow

### PyTorch

## Logarithmic and exponential computational complexity

- Docs: https://www.bigocheatsheet.com/

- [ ] DOC: Rank these with O(1) first: O(n log n), O(log n), O(1), O(n), O(n^2) +growthcurve +exponential

## Combinatorics, log, exp, and Shannon classical entropy and classical Boolean bits

https://www.google.com/search?q=formula+for+entropy :

  S = k_B \ln \Omega

Entropy > Statistical mechanics: https://en.wikipedia.org/wiki/Entropy#Statistical_mechanics

SI unit for entropy: joules per kelvin (J·K⁻¹)
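A small sketch connecting Boltzmann's S = k_B ln Ω to Shannon bits (k_B is the exact 2019 SI value; the Ω chosen here is an arbitrary illustration):

```python
import math

# Boltzmann entropy: S = k_B * ln(Omega), in J/K
k_B = 1.380649e-23          # J/K, exact by the 2019 SI redefinition
omega = 10 ** 6             # hypothetical microstate count
S = k_B * math.log(omega)

# Shannon entropy in bits: H = -sum(p * log2(p))
def shannon_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

assert shannon_bits([0.5, 0.5]) == 1.0            # one fair coin flip = 1 bit
assert math.isclose(shannon_bits([0.25] * 4), 2.0)  # 4 equal outcomes = 2 bits
```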

*****

In terms of specifying tasks for myself in order to learn {Logarithms,}, I could use e.g. todo.txt markup to specify tasks with [project and concept] labels and contexts; but todo.txt doesn't support nested lists, unlike markdown checkboxes combined with todo.txt markup and/or codelabels (if it's software math):

  - [ ] Read the Logarithms wikipedia page <url> and take +notes +math +logarithms @workstation
    - [o] Read
    - [x] BLD: mathrepo: generate from cookiecutter or nbdev
    - [ ] DOC: mathrepo: logarithm notes
    - [ ] DOC,ART: mathrepo: create exponential and logarithmic charts +logarithms @workstation
    - [ ] ENH,TST,DOC: mathrepo: logarithms with stdlib math, numpy, sympy (and *pytest* or at least `assert` assertion expressions)
    - [ ] ENH,TST,DOC: mathrepo: logarithms and exponents with NN libraries (and *pytest*)

Math (and logic; ultimately thermodynamics) transcends disciplines. Not to bikeshed - worrying about a name that can be sed-replaced later - but to choose a good variable name now: is 'mathrepo' the best scope for this project? Smaller dependency sets (i.e. a simpler environment.yml) seem to result in fewer version conflicts. `conda env export --from-history; mamba env export --from-history; pip freeze; pipenv -h; poetry -h`

### LaTeX

  $$ \log_{b} x = (b^? = x) $$
  $$ 2^3 = 8 $$
  $$ \log_{2} 8 = 3 $$
  $$ \ln e = 1 $$
  $$ \log_b(xy)=\log_b(x)+\log_b(y) $$

  $$ \begin{align}
  \textit{(1) } \log_b(xy) & = \log_b(x)+\log_b(y)
  \end{align} $$

Sources: https://en.wikipedia.org/w/index.php?title=List_of_logarithm...

#### sympy2latex

What e.g. sympy2latex parses that LaTeX into, in terms of symbolic objects in an expression tree:

  # install (the antlr runtime is required by sympy's LaTeX parser):
  #!python -m pip install antlr4-python3-runtime sympy
  #!mamba install -y -q antlr-python-runtime sympy

  import sympy
  from sympy.parsing.latex import parse_latex
  from IPython.display import display  # available in Jupyter / IPython

  def displaylatexexpr(latex):
      expr = parse_latex(latex)
      display(str(expr))
      display(expr)
      return expr

  displaylatexexpr(r'\log_{2} 8')
  # 'log(8, 2)'
  displaylatexexpr(r'\log_{2} 8 = 3')
  # 'Eq(log(8, 2), 3)'
  displaylatexexpr(r'\log_b(xy) = \log_b(x)+\log_b(y)')
  # 'Eq(log(x*y, b), log(x, b) + log(y, b))'
  displaylatexexpr(r'\log_{b} (xy) = \log_{b}(x)+\log_{b}(y)')
  # 'Eq(log(x*y, b), log(x, b) + log(y, b))'
  displaylatexexpr(r'\log_{2} (xy) = \log_{2}(x)+\log_{2}(y)')
  # 'Eq(log(x*y, 2), log(x, 2) + log(y, 2))'

### python standard library

https://docs.python.org/3/library/operator.html#operator.pow

https://docs.python.org/3/library/math.html#power-and-logari...

math.exp(x), math.expm1(x), math.log(x[, base]) (base defaults to e), math.log1p(x), math.log2(x), math.log10(x), math.pow(x, y) → float; note that math.sqrt(x) == math.pow(x, 1/2)
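A quick sketch of those identities with stdlib `math` and bare `assert` expressions (the concrete values here are illustrative):

```python
import math

x, y, b = 8.0, 32.0, 2.0

# math.log defaults to base e; a base may be passed as the second argument
assert math.isclose(math.log(math.e), 1.0)
assert math.log2(x) == 3.0
assert math.isclose(math.log(x, b), 3.0)

# product rule: log_b(x*y) == log_b(x) + log_b(y), up to float rounding
assert math.isclose(math.log(x * y, b), math.log(x, b) + math.log(y, b))

# exp/log and expm1/log1p are inverse pairs
assert math.isclose(math.exp(math.log(x)), x)
assert math.isclose(math.expm1(math.log1p(0.5)), 0.5)

# sqrt(x) == pow(x, 1/2)
assert math.isclose(math.sqrt(x), math.pow(x, 0.5))
```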

### scipy

https://docs.scipy.org/doc/scipy/reference/generated/scipy.s... scipy.special. xlog1py()

https://docs.scipy.org/doc/scipy/reference/generated/scipy.s...

### sagemath

https://doc.sagemath.org/html/en/reference/functions/sage/fu...

### statsmodels

### TensorFlow

https://www.tensorflow.org/api_docs/python/tf/math tf.math. log(), log1p(), log_sigmoid(), exp(), expm1()

https://keras.io/api/layers/activations/

SmoothReLU ("softplus"), defined with a natural logarithm as softplus(x) = ln(1 + e^x), is a smooth approximation of the ReLU activation function: https://en.wikipedia.org/wiki/Rectifier_(neural_networks)#So...
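A stdlib sketch of that relationship (the function names here are illustrative): softplus(x) = ln(1 + e^x) approaches ReLU away from zero:

```python
import math

def relu(x: float) -> float:
    return max(0.0, x)

def softplus(x: float) -> float:
    # SmoothReLU: ln(1 + e^x); log1p improves accuracy when e^x is small
    return math.log1p(math.exp(x))

# softplus approaches ReLU away from 0: softplus(10) ~ 10, softplus(-10) ~ 0
assert math.isclose(softplus(10.0), relu(10.0), abs_tol=1e-3)
assert math.isclose(softplus(-10.0), relu(-10.0), abs_tol=1e-3)
```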

E.g. Softmax and LogSumExp also include natural exponentials and logarithms in their definitions: https://en.wikipedia.org/wiki/Softmax_function
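A stdlib sketch of the LogSumExp definition and the identity softmax_i = exp(x_i - logsumexp(x)), using the standard max-shift for numerical stability (names here are illustrative):

```python
import math

def logsumexp(xs):
    # log(sum(exp(x))) computed stably by factoring out the max
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def softmax(xs):
    # softmax_i = exp(x_i - logsumexp(x)); the results sum to 1
    lse = logsumexp(xs)
    return [math.exp(x - lse) for x in xs]

xs = [1000.0, 1001.0, 1002.0]  # naive exp(1000.0) would overflow a float
assert math.isclose(sum(softmax(xs)), 1.0)
assert math.isclose(logsumexp([0.0, 0.0]), math.log(2.0))
```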

### PyTorch

https://pytorch.org/docs/stable/generated/torch.log.html torch. log(), log10(), log1p(), log2(), exp(), exp2(), expm1(); logaddexp() , logaddexp2(), logsumexp(), torch.special.xlog1py()

***

Regarding this learning process and these tools: I now have a few replies to myself (!) in not-quite-markdown with various headings. I should consolidate this information into a [MyST] markdown Jupyter Notebook and re-read the whole thing. If this had been decent markdown from the start, I'd have less markup work to do to create a ScholarlyArticle / Notebook.


Automatic cipher suite ordering in Go’s crypto/tls


From "Go Crypto and Kubernetes — FIPS 140–2 and FedRAMP Compliance" (2021) https://gokulchandrapr.medium.com/go-crypto-and-kubernetes-f... :

> If a vendor wants to supply cloud-based services to the US Federal Government, then they have to get FedRAMP approval. This certification process covers a whole host of security issues, but is very specific about its requirements on cryptography: usage of FIPS 140–2 validated modules wherever cryptography is needed, these encryption standards protect the cryptographic module from being cracked, altered, or otherwise tampered with. FIPS 140–2 validated encryption is a prerequisite for FedRAMP. [...]

> [...] Go Cryptography and Kubernetes — FIPS 140–2 Kubernetes is a Go project, as are most of the Kubernetes subcomponents and ecosystem. Golang has a crypto standard library, Golang Crypto which fulfills almost all the application crypto needs (TLS stack implementation for HTTPS servers and clients all the way to HMAC or any other primitive that are needed to make signatures to verify hashes, encrypt messages.). Go has made a different choice compared to most languages, which usually come with links or wrappers for OpenSSL or simply don’t provide any cryptography in the standard library (Rust doesn’t have standard library cryptography, JavaScript only has web crypto, Python doesn’t come with a crypto standard library). [...]

> The native go crypto is not FIPS compliant and there are few open proposals to facilitate Go code to meet FIPS requirements. Users can use prominent go compilers/toolsets backed by FIPS validated SSL libraries provided by Google or Redhat which enables Go to bypass the standard library cryptographic routines and instead call into a FIPS 140–2 validated cryptographic library. These toolsets are available as container images, where users can use the same to compile any Go based applications. [...]

> When a RHEL system is booted in FIPS mode, Go will instead call into OpenSSL via a new package that bridges between Go and OpenSSL. This also can be manually enabled by setting `GOLANG_FIPS=1`. The Go Toolset is available as a container image that can be downloaded from Red Hat Container Registry. Red Hat mentions this as a new feature built on top of existing upstream work (BoringSSL). [...]

> To be FIPS 140–2 compliant, the module must use FIPS 140–2 compliant algorithms, ciphers, key establishment methods, and other protection profiles.

> FIPS-approved algorithms do change at times; not extremely frequently, but more often than they come out with a new version of FIPS 140. [...]

> Some of the fundamental requirements (not limited to) are as follows:

> [...] Support for TLS 1.0 and TLS 1.1 is now deprecated (only allowed in certain cases). TLS 1.3 is the preferred option, while TLS 1.2 is only tolerated.

> [...] DSA/RSA/ECDSA are only approved for key generation/signature.

> [...] The 0-RTT option in TLS 1.3 should be avoided.

Was there lag between the release of TLS 1.3 and an updated release of FIPS 140? @18f @DefenseDigital Can those systems be upgraded as easily?


Scikit-Learn Version 1.0

m3at | 2021-09-14 04:50:14 | 260 | # | ^

There are scikit-learn (sklearn) API-compatible wrappers for e.g. PyTorch and TensorFlow.

Skorch: https://github.com/skorch-dev/skorch

tf.keras.wrappers.scikit_learn: https://www.tensorflow.org/api_docs/python/tf/keras/wrappers...

AFAIU, there are no Yellowbrick visualizers for PyTorch or TensorFlow; though PyTorch and TensorFlow work with TensorBoard for visualizing CFG execution.

> Many machine learning libraries implement the scikit-learn `estimator API` to easily integrate alternative optimization or decision methods into a data science workflow. Because of this, it seems like it should be simple to drop in a non-scikit-learn estimator into a Yellowbrick visualizer, and in principle, it is. However, the reality is a bit more complicated.

> Yellowbrick visualizers often utilize more than just the method interface of estimators (e.g. `fit()` and `predict()`), relying on the learned attributes (object properties with a single underscore suffix, e.g. `coef_`). The issue is that when a third-party estimator does not expose these attributes, truly gnarly exceptions and tracebacks occur. Yellowbrick is meant to aid machine learning diagnostics reasoning, therefore instead of just allowing drop-in functionality that may cause confusion, we’ve created a wrapper functionality that is a bit kinder with its messaging.

Looks like there are Yellowbrick wrappers for XGBoost, CatBoost, CuML, and Spark MLlib; but not for NNs yet. https://www.scikit-yb.org/en/latest/api/contrib/wrapper.html...

From the RAPIDS.ai CuML team: https://docs.rapids.ai/api/cuml/stable/ :

> cuML is a suite of fast, GPU-accelerated machine learning algorithms designed for data science and analytical tasks. Our API mirrors Sklearn’s, and we provide practitioners with the easy fit-predict-transform paradigm without ever having to program on a GPU.

> As data gets larger, algorithms running on a CPU become slow and cumbersome. RAPIDS provides users a streamlined approach where data is initially loaded in the GPU, and compute tasks can be performed on it directly.

CuML is not an NN library, but performance optimizations from CuDF and CuML would likely accelerate NNs as well.

Dask ML works with models with sklearn interfaces, XGBoost, LightGBM, PyTorch, and TensorFlow: https://ml.dask.org/ :

> Scikit-Learn API

> In all cases Dask-ML endeavors to provide a single unified interface around the familiar NumPy, Pandas, and Scikit-Learn APIs. Users familiar with Scikit-Learn should feel at home with Dask-ML.

dask-labextension for JupyterLab helps to visualize Dask ML CFGs which call predictors and classifiers with sklearn interfaces: https://github.com/dask/dask-labextension


Ctrl-F automl https://westurner.github.io/hnlog/

> /? hierarchical automl "sklearn" site:github.com : https://www.google.com/search?q=hierarchical+automl+%22sklea...

https://westurner.github.io/hnlog/#comment-18798244

> Dask-ML works with {scikit-learn, xgboost, tensorflow, TPOT,}. ETL is your responsibility. Loading things into parquet format affords a lot of flexibility in terms of (non-SQL) datastores or just efficiently packed files on disk that need to be paged into/over in RAM. (Edit)

scale-scikit-learn https://examples.dask.org/machine-learning/scale-scikit-lear... -> dask.distributed parallel prediction: https://examples.dask.org/machine-learning/parallel-predicti...

"Hyperparameter optimization with Dask" https://examples.dask.org/machine-learning/hyperparam-opt.ht...

> sklearn.pipeline.Pipeline API: {fit(), transform(), predict(), score()} https://scikit-learn.org/stable/modules/generated/sklearn.pi... : ```

decision_function(X) # Apply transforms, and decision_function of the final estimator

fit(X[, y]) # Fit the model

fit_predict(X[, y]) # Applies fit_predict of last step in pipeline after transforms.

fit_transform(X[, y]) # Fit the model and transform with the final estimator

get_params([deep]) # Get parameters for this estimator.

predict(X, *predict_params) # Apply transforms to the data, and predict with the final estimator

predict_log_proba(X) # Apply transforms, and predict_log_proba of the final estimator

predict_proba(X) # Apply transforms, and predict_proba of the final estimator

score(X[, y, sample_weight]) # Apply transforms, and score with the final estimator

score_samples(X) # Apply transforms, and score_samples of the final estimator.

set_params(**kwargs) # Set the parameters of this estimator

```
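A minimal runnable Pipeline that exercises a few of those methods (the dataset and estimator choices here are illustrative; assumes scikit-learn is installed):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# fit()/predict()/score() apply each transform, then the final estimator
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipe.fit(X, y)
assert pipe.predict(X).shape == y.shape
assert pipe.score(X, y) > 0.9  # training accuracy on iris
```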

> https://docs.featuretools.com can also minimize ad-hoc boilerplate ETL / feature engineering :

>> Featuretools is a framework to perform automated feature engineering. It excels at transforming temporal and relational datasets into feature matrices for machine learning

From https://featuretools.alteryx.com/en/stable/guides/using_dask... :

> Creating a feature matrix from a very large dataset can be problematic if the underlying pandas dataframes that make up the entities cannot easily fit in memory. To help get around this issue, Featuretools supports creating Entity and EntitySet objects from Dask dataframes. A Dask EntitySet can then be passed to featuretools.dfs or featuretools.calculate_feature_matrix to create a feature matrix, which will be returned as a Dask dataframe. In addition to working on larger than memory datasets, this approach also allows users to take advantage of the parallel and distributed processing capabilities offered by Dask


Signed Exchanges on Google Search

From https://blog.cloudflare.com/automatic-signed-exchanges/ :

> The broader implication of SXGs is that they make content portable: content delivered via an SXG can be easily distributed by third parties while maintaining full assurance and attribution of its origin. Historically, the only way for a site to use a third party to distribute its content while maintaining attribution has been for the site to share its SSL certificates with the distributor. This has security drawbacks. Moreover, it is a far stretch from making content truly portable.

> In the long-term, truly portable content can be used to achieve use cases like fully offline experiences. In the immediate term, the primary use case of SXGs is the delivery of faster user experiences by providing content in an easily cacheable format. Specifically, Google Search will cache and sometimes prefetch SXGs. For sites that receive a large portion of their traffic from Google Search, SXGs can be an important tool for delivering faster page loads to users.

> It’s also possible that all sites could eventually support this standard. Every time a site is loaded, all the linked articles could be pre-loaded. Web speeds across the board would be dramatically increased.

"Signed HTTP Exchanges" draft-yasskin-http-origin-signed-responses https://wicg.github.io/webpackage/draft-yasskin-http-origin-...

"Bundled HTTP Exchanges" draft-yasskin-wpack-bundled-exchanges https://wicg.github.io/webpackage/draft-yasskin-wpack-bundle... :

> Web bundles provide a way to bundle up groups of HTTP responses, with the request URLs and content negotiation that produced them, to transmit or store together. They can include multiple top-level resources with one identified as the default by a primaryUrl metadata, provide random access to their component exchanges, and efficiently store 8-bit resources.

From https://web.dev/web-bundles/ :

> Introducing the Web Bundles API. A Web Bundle is a file format for encapsulating one or more HTTP resources in a single file. It can include one or more HTML files, JavaScript files, images, or stylesheets.

> Web Bundles, more formally known as Bundled HTTP Exchanges, are part of the Web Packaging proposal.

> HTTP resources in a Web Bundle are indexed by request URLs, and can optionally come with signatures that vouch for the resources. Signatures allow browsers to understand and verify where each resource came from, and treats each as coming from its true origin. This is similar to how Signed HTTP Exchanges, a feature for signing a single HTTP resource, are handled.


AlphaGo documentary (2020) [video]

rdli | 2021-09-11 17:43:17 | 248 | # | ^

AlphaFold 2 solved the CASP protein folding problem that e.g. Folding@home et al. have been churning at for a while, FWIU. From November 2020: https://deepmind.com/blog/article/alphafold-a-solution-to-a-...

https://en.wikipedia.org/wiki/AlphaFold#SARS-CoV-2 :

> AlphaFold has been used to predict structures of proteins of SARS-CoV-2, the causative agent of COVID-19 [...] The team acknowledged that though these protein structures might not be the subject of ongoing therapeutical research efforts, they will add to the community's understanding of the SARS-CoV-2 virus.[74] Specifically, AlphaFold 2's prediction of the structure of the ORF3a protein was very similar to the structure determined by researchers at University of California, Berkeley using cryo-electron microscopy. This specific protein is believed to assist the virus in breaking out of the host cell once it replicates. This protein is also believed to play a role in triggering the inflammatory response to the infection (... Berkeley ALS and SLAC beamlines ... S309 & Sotrovimab: https://scitechdaily.com/inescapable-covid-19-antibody-disco... )

Is there yet an open implementation of AlphaFold 2? edit: https://github.com/search?q=alphafold ... https://github.com/deepmind/alphafold

How do I reframe this problem in terms of fundamental algorithmic complexity classes (and thus the Quantum Algorithm Zoo thing that might optimize the currently fundamentally algorithmically computationally hard part of the hot loop that is the cost driver in this implementation)?

To cite in full from the MuZero blog post from December 2020: https://deepmind.com/blog/article/muzero-mastering-go-chess-... :

> Researchers have tried to tackle this major challenge in AI by using two main approaches: lookahead search or model-based planning.

> Systems that use lookahead search, such as AlphaZero, have achieved remarkable success in classic games such as checkers, chess and poker, but rely on being given knowledge of their environment’s dynamics, such as the rules of the game or an accurate simulator. This makes it difficult to apply them to messy real world problems, which are typically complex and hard to distill into simple rules.

> Model-based systems aim to address this issue by learning an accurate model of an environment’s dynamics, and then using it to plan. However, the complexity of modelling every aspect of an environment has meant these algorithms are unable to compete in visually rich domains, such as Atari. Until now, the best results on Atari are from model-free systems, such as DQN, R2D2 and Agent57. As the name suggests, model-free algorithms do not use a learned model and instead estimate what is the best action to take next.

> MuZero uses a different approach to overcome the limitations of previous approaches. Instead of trying to model the entire environment, MuZero just models aspects that are important to the agent’s decision-making process. After all, knowing an umbrella will keep you dry is more useful to know than modelling the pattern of raindrops in the air.

> Specifically, MuZero models three elements of the environment that are critical to planning:

> * The value: how good is the current position?

> * The policy: which action is the best to take?

> * The reward: how good was the last action?

> These are all learned using a deep neural network and are all that is needed for MuZero to understand what happens when it takes a certain action and to plan accordingly.

> Illustration of how Monte Carlo Tree Search can be used to plan with the MuZero neural networks. Starting at the current position in the game (schematic Go board at the top of the animation), MuZero uses the representation function (h) to map from the observation to an embedding used by the neural network (s0). Using the dynamics function (g) and the prediction function (f), MuZero can then consider possible future sequences of actions (a), and choose the best action.

> MuZero uses the experience it collects when interacting with the environment to train its neural network. This experience includes both observations and rewards from the environment, as well as the results of searches performed when deciding on the best action.

> During training, the model is unrolled alongside the collected experience, at each step predicting the previously saved information: the value function v predicts the sum of observed rewards (u), the policy estimate (p) predicts the previous search outcome (π), the reward estimate r predicts the last observed reward (u). This approach comes with another major benefit: MuZero can repeatedly use its learned model to improve its planning, rather than collecting new data from the environment. For example, in tests on the Atari suite, this variant - known as MuZero Reanalyze - used the learned model 90% of the time to re-plan what should have been done in past episodes.
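The quoted description of the representation (h), dynamics (g), and prediction (f) functions can be sketched as interfaces; every function body below is a toy placeholder, not DeepMind's implementation:

```python
from typing import List, Tuple

State = Tuple[float, ...]  # a learned embedding, not the raw environment state

def h(observation: List[float]) -> State:
    """Representation function: map an observation to an embedding s0."""
    return tuple(observation)  # placeholder standing in for a neural network

def g(state: State, action: int) -> Tuple[State, float]:
    """Dynamics function: predict the next embedding and the reward r."""
    return tuple(x + action for x in state), float(action)  # placeholder

def f(state: State) -> Tuple[List[float], float]:
    """Prediction function: return a policy p over actions and a value v."""
    return [0.5, 0.5], sum(state)  # placeholder

def plan_one_step(observation: List[float], actions: List[int]) -> int:
    # one-ply lookahead: choose the action maximizing predicted reward + value
    s0 = h(observation)
    best_action, best_score = actions[0], float("-inf")
    for a in actions:
        s1, r = g(s0, a)
        _, v = f(s1)
        if r + v > best_score:
            best_action, best_score = a, r + v
    return best_action

assert plan_one_step([0.0, 0.0], [0, 1]) == 1
```

MuZero's real planner runs a Monte Carlo Tree Search over many such unrolled steps; this one-ply version only illustrates how h, g, and f compose.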

FWIU, from what's going on over there:

AlphaGo => AlphaGo {Fan, Lee, Master, Zero} => AlphaGoZero => AlphaZero => MuZero

AlphaGo Zero: https://en.wikipedia.org/wiki/AlphaGo_Zero

AlphaZero: https://en.wikipedia.org/wiki/AlphaZero

MuZero: https://en.wikipedia.org/wiki/MuZero

AlphaFold {1,2}: https://en.wikipedia.org/wiki/AlphaFold

IIRC, there is no official implementation of e.g. AlphaZero or MuZero that works with e.g. openai/gym (and openai/retro) for comparing reinforcement learning algorithms? https://github.com/openai/gym

What are the benchmarks for Applied RL?

From https://news.ycombinator.com/item?id=28499001 :

> AFAIU, while there are DLTs that cost CPU, RAM, and Data storage between points in spacetime, none yet incentivize energy efficiency by varying costs depending upon whether the instructions execute on a FPGA, ASIC, CPU, GPU, TPU, or QPU? [...]

> To be 200% green - to put a 200% green footer with search-discoverable RDFa on your site - I think you need PPAs and all directly sourced clean energy.

> (Energy efficiency is very relevant to ML/AI/AGI, because while it may be the case that the dumb universal function approximator will eventually find a better solution, "just leave it on all night/month/K12+postdoc" in parallel is a very expensive proposition with no apparent oracle; and then to ethically filter solutions still costs at least one human)


Libraries.io indexes software dependencies; but no Dependent packages or Dependent repositories are yet listed for the pypi:alphafold package: https://libraries.io/pypi/alphafold

The GitHub network/dependents view currently lists one repo that depends upon deepmind/alphafold: https://github.com/deepmind/alphafold/network/dependents

(Linked citations for science: How to cite a schema:SoftwareApplication in a schema:ScholarlyArticle , How to cite a software dependency in a dependency specification parsed by e.g. Libraries.io and/or GitHub. e.g. FigShare and Zenodo offer DOIs for tags of git repos, that work with BinderHub and repo2docker and hopefully someday repo2jupyterlite. https://westurner.github.io/hnlog/#comment-24513808 )

/?gscholar alphafold: https://scholar.google.com/scholar?q=alphafold

On a Google Scholar search result page, you can click "Cited by [ ]" to check which documents contain textual and/or URL citations gscholar has parsed and identified as indicating a relation to a given ScholarlyArticle.

/?sscholar alphafold: https://www.semanticscholar.org/search?q=alphafold

On a Semantic Scholar search result page, you can click the "“" to check which documents contain textual and/or URL citations Semantic Scholar has parsed and identified as indicating a relation to a given ScholarlyArticle.

/?smeta alphafold: https://www.meta.org/search?q=t---alphafold

On a Meta.org search result page, you can click the article title and scroll down to "Citations" to check which documents contain textual and/or URL citations Meta has parsed and identified as indicating a relation to a given ScholarlyArticle.

Do any of these use structured data like https://schema.org/ScholarlyArticle ? (... https://westurner.github.io/hnlog/#comment-28495597 )


Interpretable Model-Based Hierarchical RL Using Inductive Logic Programming


AutoML is RL? The entire exercise of publishing and peer review is an exercise in cybernetics?

https://en.wikipedia.org/wiki/Probabilistic_logic_network :

> The basic goal of PLN is to provide reasonably accurate probabilistic inference in a way that is compatible with both term logic and predicate logic, and scales up to operate in real time on large dynamic knowledge bases.

> The goal underlying the theoretical development of PLN has been the creation of practical software systems carrying out complex, useful inferences based on uncertain knowledge and drawing uncertain conclusions. PLN has been designed to allow basic probabilistic inference to interact with other kinds of inference such as intensional inference, fuzzy inference, and higher-order inference using quantifiers, variables, and combinators, and be a more convenient approach than Bayesian networks (or other conventional approaches) for the purpose of interfacing basic probabilistic inference with these other sorts of inference. In addition, the inference rules are formulated in such a way as to avoid the paradoxes of Dempster–Shafer theory.

Has anybody already taught / reinforced an OpenCog [PLN, MOSES] AtomSpace hypergraph agent to do Linked Data prep and also convex optimization with AutoML, better than grid search (so, gradients)?

Perhaps teaching users to bias analyses with e.g. Yellowbrick and the sklearn APIs would be a good curriculum traversal?

openai/baselines "Logging and vizualizing learning curves and other training metrics" https://github.com/openai/baselines#logging-and-vizualizing-...

https://en.wikipedia.org/wiki/AlphaZero

There's probably an awesome-automl by now? Again, the sklearn interfaces.

TIL that SymPy supports NumPy, PyTorch, and TensorFlow [Quantum; TFQ?]; and with a Computer Algebra System something for mutating the AST may not be necessary for symbolic expression trees without human-readable comments or symbol names? Lean mathlib: https://github.com/leanprover-community/mathlib , and then reasoning about concurrent / distributed systems (with side channels in actual physical component space) with e.g. TLA+.

There are new UUID formats that are timestamp-sortable; for when blockchain cryptographic hashes aren't enough entropy. "New UUID Formats – IETF Draft" https://news.ycombinator.com/item?id=28088213
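E.g. the draft's UUIDv7 layout leads with a millisecond timestamp so that lexicographic order tracks creation order; a rough stdlib sketch of the idea (not the draft's exact bit layout; `timestamp_sortable_id` is a made-up name):

```python
import os
import time

def timestamp_sortable_id() -> str:
    # 48-bit millisecond timestamp prefix + 80 random bits, hex-encoded;
    # because the timestamp comes first, string sort order tracks creation order
    ms = int(time.time() * 1000) & ((1 << 48) - 1)
    return ms.to_bytes(6, "big").hex() + os.urandom(10).hex()

a = timestamp_sortable_id()
time.sleep(0.002)
b = timestamp_sortable_id()
assert a < b  # sorts by creation time, at millisecond resolution
```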

... You can host online ML algos through SingularityNet, which also does PayPal now for the RL.

Our visual / auditory biological neural networks do appear to be hierarchical and relatively highly plastic as well.

If you're planning to mutate, crossover, and select expression trees, you'll need a survival function (~cost function) in order to reinforce; RL.

Blockchains cost immutable data storage with data integrity protections by the byte.

Smart contracts cost CPU usage with costed opcodes. eWASM (Ethereum WebAssembly) has costed opcodes for redundantly-executed smart contracts (that execute on n nodes of a shard) https://ewasm.readthedocs.io/en/mkdocs/determining_wasm_gas_...

AFAIU, while there are DLTs that cost CPU, RAM, and Data storage between points in spacetime, none yet incentivize energy efficiency by varying costs depending upon whether the instructions execute on a FPGA, ASIC, CPU, GPU, TPU, or QPU?

To be 200% green - to put a 200% green footer with search-discoverable RDFa on your site - I think you need PPAs and all directly sourced clean energy.

(Energy efficiency is very relevant to ML/AI/AGI, because while it may be the case that the dumb universal function approximator will eventually find a better solution, "just leave it on all night/month/K12+postdoc" in parallel is a very expensive proposition with no apparent oracle; and then to ethically filter solutions still costs at least one human)

> Perhaps teaching users to bias analyses with e.g. Yellowbrick and the sklearn APIs would be a good curriculum traversal?

Yellowbrick > Third-Party Estimators (yellowbrick.contrib.wrapper): https://www.scikit-yb.org/en/latest/api/contrib/wrapper.html

From https://www.scikit-yb.org/en/latest/quickstart.html#using-ye... :

> The Yellowbrick API is specifically designed to play nicely with scikit-learn. The primary interface is therefore a Visualizer – an object that learns from data to produce a visualization. Visualizers are scikit-learn Estimator objects and have a similar interface along with methods for drawing. In order to use visualizers, you simply use the same workflow as with a scikit-learn model, import the visualizer, instantiate it, call the visualizer’s fit() method, then in order to render the visualization, call the visualizer’s show() method.

> For example, there are several visualizers that act as transformers, used to perform feature analysis prior to fitting a model. The following example visualizes a high-dimensional data set with parallel coordinates:

  from yellowbrick.features import ParallelCoordinates

  # assumes X (feature matrix) and y (target vector) are already loaded,
  # e.g. via sklearn.datasets
  visualizer = ParallelCoordinates()
  visualizer.fit_transform(X, y)
  visualizer.show()

> As you can see, the workflow is very similar to using a scikit-learn transformer, and visualizers are intended to be integrated along with scikit-learn utilities. Arguments that change how the visualization is drawn can be passed into the visualizer upon instantiation, similarly to how hyperparameters are included with scikit-learn models.

IIRC, some automl tools - which test various combinations of, stacks of, ensembles of e.g. Estimators - do test hierarchical ensembles? Are those 'piecewise' and ultimately not the unified theory we were looking for here either (but often a good enough, fast enough, sufficient approximate solution with a sufficiently low error term)?

/? hierarchical automl "sklearn" site:github.com : https://www.google.com/search?q=hierarchical+automl+%22sklea...


Ship / Show / Ask: A modern branching strategy


> Where I currently work, we have "skip review" and "skip preflight" labels for this. The mergers have the power to merge anything anyway, the labels are only to make it an official request.

From the OP:

> Changes are categorized as either Ship (merge into mainline without review), Show (open a pull request for review, but merge into mainline immediately), or Ask (open a pull request for discussion before merging).


Checklists are often a good thing; and an opportunity to optimize processes with team feedback!

"Post-surgical deaths in Scotland drop by a third, attributed to a checklist" https://news.ycombinator.com/item?id=19684376 https://westurner.github.io/hnlog/#comment-19684376


Show HN: TweeView – A Tree Visualisation of Twitter Conversations


The 4D view looks a bit like Gource with the wawa aura and all.

Is there anything that finds cycles in the tweet graph (quote tweet "edges")? And unshortened link frequencies, maybe


Wireless Charging Power Side-Channel Attacks


> assume the mentality that all consumer devices connected to the internet should be treated as insecure by default.

"Zero trust security model" https://en.wikipedia.org/wiki/Zero_trust_security_model :

> The main concept behind zero trust is that devices should not be trusted by default, even if they are connected to a managed corporate network such as the corporate LAN and even if they were previously verified.


From https://planetfriendlyweb.org/mental-model :

> When you think about how a digital product or website creates an environmental impact, you can think of it creating it in three main ways - through the Packets of data it sends to users, the Platform the product runs on, and the Process used to make the product itself.

From https://sustainableux.com/talks/2018/how-to-build-a-planet-f... :

> SustainableUX: design vs. climate change. Online, Worldwide, Free. The online event for UX, front-end, and product people who want to make a positive impact—on climate-change, social equality, and inclusion


How We Proved the Eth2 Deposit Contract Is Free of Runtime Errors


From "Discover and Prevent Linux Kernel Zero-Day Exploit Using Formal Verification" https://news.ycombinator.com/item?id=27442273 :

> [Coq, VST, CompCert]

> Formal methods: https://en.wikipedia.org/wiki/Formal_methods

> Formal specification: https://en.wikipedia.org/wiki/Formal_specification

> Implementation of formal specification: https://en.wikipedia.org/wiki/Anti-pattern#Software_engineer...

> Formal verification: https://en.wikipedia.org/wiki/Formal_verification

> From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :

>> Which universities teach formal methods?

>> - q=formal+verification https://www.class-central.com/search?q=formal+verification

>> - q=formal+methods https://www.class-central.com/search?q=formal+methods

>> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?


Physics-Based Deep Learning Book


"Physics-informed neural networks" https://en.wikipedia.org/wiki/Physics-informed_neural_networ...

But what about statistical thermodynamics and information theory? What about thin film?

What are some applications for PINNs and for {DL, RL,} in physics?


Ask HN: Books that teach you programming languages via systems projects?

Foe | 2021-09-10 03:38:41 | 204 | # | ^

Looking for a book/textbook that teaches you a programming language through systems (or vice versa). For example, a book that teaches modern C++ by showing you how to program a compiler; a book that teaches operating systems and the language of choice in the book is Rust; a book that teaches database internals through Golang; etc. Basically, looking for a fun project-based book that I can walk through and spend my free time working through.

Any recommendations?

From "Ask HN: What are some books where the reader learns by building projects?" https://news.ycombinator.com/item?id=26042447 :

> "Agile Web Development with Rails [6]" (2020) teaches TDD and agile in conjunction with a DRY, CoC, RAD web application framework: https://g.co/kgs/GNqnWV

And:

> "ugit – Learn Git Internals by Building Git in Python" https://www.leshenko.net/p/ugit/


How you can track your personal finances using Python

> We take the output of the previous step, pipe everything over to our .beancount file, and "balance" transactions.

> Recall that the flow of money in double-entry accounting is represented using transactions involving at least two accounts. When you download CSVs from your bank, each line in that CSV represents money that's either incoming or outgoing. That's only one leg of a transaction (credit or debit). It's up to us to provide the other leg.

> This act is called "balancing".

Balance (accounting) https://en.wikipedia.org/wiki/Balance_(accounting)
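The "other leg" requirement above is just the double-entry invariant that a transaction's legs sum to zero; a minimal sketch (illustrative, not Beancount's API):

```python
from decimal import Decimal

# one leg per (account, amount); a bank CSV line gives you only the first leg
transaction = [
    ("Assets:MyBank:Checking", Decimal("-1000.00")),
    ("Expenses:Rent", Decimal("1000.00")),  # the leg we supply to "balance"
]

def is_balanced(legs) -> bool:
    # double-entry invariant: debits and credits cancel exactly
    return sum(amount for _, amount in legs) == 0

assert is_balanced(transaction)
```

Decimal (rather than float) keeps the sum exact, which is why plain-text accounting tools refuse to balance on binary floating point.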

Are unique record IDs necessary for this [financial] application? FWICS, https://plaintextaccounting.org/ just throws away the (probably per-institution) transaction IDs; like a non-reflexive logic that eschews Law of identity? Just grep and wc?

> What does the ledger look like?

> I wrote earlier that one of the main things that Beancount provides is a language specification for defining financial transactions in a plain-text format.

> What does this format look like? Here's a quick example:

  option "title" "Alice"
  option "operating_currency" "EUR"

  ; Accounts
  2021-01-01 open Assets:MyBank:Checking
  2021-01-01 open Expenses:Rent

  2021-01-01 * "Landlord" "Thanks for the rent"
      Assets:MyBank:Checking     -1000.00 EUR
      Expenses:Rent               1000.00 EUR
What does the `*` do?

[+]
[+]
[+]

From https://news.ycombinator.com/item?id=28203393 :

> No, your personal data is not sold or rented or given away or bartered to parties that are not Plaid, your bank, or the connected app. We talk about all of this in our privacy policy, including ways that data could be used — for example, with data processors/service providers (like AWS which hosts our services) for the purposes of running Plaid’s services or for a user’s connected app to provide their services.

>> I saw that. Thank you for your patience and persistence in responding to so many pointed questions.

>> For any interested, here is a link to relevant section of the referenced privacy policy: https://plaid.com/legal/#consumers

>> I am also impressed by the Legal Changelog on the same page that clearly lays out a log of changes made to privacy & other published legal documents.

[+]

Are you making claims without evidence? Settling is not an admission of guilt.

Banks should implement read-only OAuth APIs, so that users are not required to store their u/p/sqa answers.

From "Canada calls screen scraping ‘unsecure,’ sets Open Banking target for 2023" https://news.ycombinator.com/item?id=28229957 :

> AFAIU, there are still zero (0) consumer banking APIs with Read-Only e.g. OAuth APIs in the US as well?

Looks like there may be fewer than three so far.

> Banks could save themselves CPU, RAM, bandwidth, and liability by implementing read-only API tokens and methods that need only return JSON - instead of HTML or worse, monthly PDF tables for a fee - possibly similar to the Plaid API: https://plaid.com/docs/api/

> There is competition in consumer/retail banking, but still the only way to do e.g. budget and fraud analysis with third party apps is to give away all authentication factors: u/p/sqa; and TBH that's unacceptable.

> Traditional and distributed ledger service providers might also consider W3C ILP: Interledger Protocol (in starting their move to quantum-resistant ledgers by 2022 in order to have a 5 year refresh cycle before QC is a real risk by 2027, optimistically, for science) when reviewing the entropy of username+password_hash+security_question_answer strings in comparison to the entropy of cryptoasset account public key hash strings: https://interledger.org/developer-tools/get-started/overview...

[+]

How did their policies change before and after said settlement?

From https://my.plaid.com/help/360043065354-does-plaid-have-acces... :

> Does Plaid have access to my credentials?

> The type of connection Plaid has to your financial institution determines whether or not we have access to the login credentials for your financial account: your username and password.

> In many cases, when you link a financial institution to an app via Plaid, you provide your login credentials to us and we securely store them. We use those credentials to access and obtain information from your financial institution in order to provide that information, at your direction, to the apps and services you want to use. For more information on how we use your data, please refer to our End User Privacy Policy.

> In other cases, after you request that we link your financial institution to an app or service you want to use, you will be prompted to provide your login credentials directly to your financial institution––not to Plaid––and, upon successful authentication, your financial institution will then return your data to Plaid. In these cases, Plaid does not access or store your account credentials. Instead, your financial institution provides Plaid with a type of security identifier, which permits Plaid to securely reconnect to your financial institution at regularly scheduled intervals to keep your apps and services up-to-date.

> Regardless of which type of connection is made, we do not share your credentials with the apps or services you’ve linked to your financial institution via Plaid. You can read more about how Plaid handles data here.

What do you think this should say instead?

Do you think they use the same key to securely store all accounts, like ACH? Or no key, like the bank ledger that you're downloading a window of as CSV, hopefully through a read-only SQL account, and hopefully with data encrypted at rest and in motion?

When you download a CSV or an OFX to a local file, is the data then still encrypted at rest?

Again, US banks can eliminate the need for {Plaid, Mint, } as the account-data-access middlemen by providing a read-only OAuth API. Because banks do not have a way to allow users to grant read-only access to their account ledgers, the only solution is to securely store the u/p/sqa. If you write a script to fetch your data and call it from cron, how can you decrypt the account credentials after an unattended reboot? When must a human enter key material to decrypt the stored u/p/sqa?
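As a sketch of what such read-only access could look like: the endpoint, token scope, and response shape below are all invented for illustration (no real bank exposes this today, which is the point), but the shape is the key contrast with storing u/p/sqa:

```python
import json

def build_request(token):
    """A scoped, revocable bearer token replaces stored u/p/sqa entirely."""
    return {
        "method": "GET",  # read-only: no POST/PUT/DELETE granted by this scope
        "url": "https://bank.example/api/v1/transactions",  # hypothetical endpoint
        "headers": {
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",  # JSON, not scraped HTML or PDF tables
        },
    }

# A hypothetical JSON response body: one leg per transaction, with stable IDs.
sample = json.loads(
    '[{"id": "txn-001", "date": "2021-01-01", "amount": -1000.00, "payee": "Landlord"}]'
)
req = build_request("read-only-scoped-token")
print(req["url"], "->", len(sample), "transactions")
```

A cron script holding only a revocable read-only token has nothing worth decrypting after an unattended reboot; revoking the token ends the exposure.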

Here, we realize that banks should really have infosec people who comprehend symmetric and asymmetric cryptography conduct audits to point out these sorts of vulnerabilities and risks. And if they had kept current with the times, we would have a very different banking and finance information systems architecture with fewer single points of failure.

[+]

Wow! Great work on an alternative.

[-]

CISA Lays Out Security Rules for Zero Trust Clouds

"Cloud Security Technical Reference Architecture (TRA)" (2021) https://cisa.gov/publication/cloud-security-technical-refere...

> The Cloud Security TRA provides agencies with guidance on the shared risk model for cloud service adoption (authored by FedRAMP), how to build a cloud environment (authored by USDS), and how to monitor such an environment through robust cloud security posture management (authored by CISA).

> Public Comment Period - NOW OPEN! CISA is releasing the Cloud Security TRA for public comment to collect critical feedback from agencies, industry, and academia to ensure the guidance fully addresses considerations for secure cloud migration. The public comment period begins Tuesday, September 7, 2021 and concludes on Friday, October 1, 2021. CISA is interested in gathering feedback focused on the following key questions: […]

"Zero Trust Maturity Model" (2021) https://cisa.gov/publication/zero-trust-maturity-model

> CISA’s Zero Trust Maturity Model is one of many roadmaps for agencies to reference as they transition towards a zero trust architecture. The goal of the maturity model is to assist agencies in the development of their zero trust strategies and implementation plans and present ways in which various CISA services can support zero trust solutions across agencies.

> The maturity model, which include five pillars and three cross-cutting capabilities, is based on the foundations of zero trust. Within each pillar, the maturity model provides agencies with specific examples of a traditional, advanced, and optimal zero trust architecture.

> Public Comment Period – NOW OPEN! CISA drafted the Zero Trust Maturity Model in June to assist agencies in complying with the Executive Order. While the distribution was originally limited to agencies, CISA is excited to release the maturity model for public comment.

> CISA is releasing the Zero Trust Maturity Model for public comment beginning Tuesday, September 7, 2021 and concludes on Friday, October 1, 2021. CISA is interested in gathering feedback focused on the following key questions: […]

[-]

Show HN: Heroku Alternative for Python/Django apps

[+]
[+]

dokku-scheduler-kubernetes https://github.com/dokku/dokku-scheduler-kubernetes#function...

> The following functionality has been implemented: Deployment and Service annotations, Domain proxy support via the Nginx Ingress Controller, Environment variables, Letsencrypt SSL Certificate integration via CertManager, Pod Disruption Budgets, Resource limits and reservations (reservations == kubernetes requests), Zero-downtime deploys via Deployment healthchecks, Traffic to non-web containers (via a configurable list)

[-]

SPDX Becomes Internationally Recognized Standard for Software Bill of Materials

From OP:

> Between eighty and ninety percent (80%-90%) of a modern application is assembled from open source software components. An SBOM accounts for the software components contained in an application — open source, proprietary, or third-party — and details their provenance, license, and security attributes. SBOMs are used as a part of a foundational practice to track and trace components across software supply chains. SBOMs also help to proactively identify software issues and risks and establish a starting point for their remediation.

> SPDX results from ten years of collaboration from representatives across industries, including the leading Software Composition Analysis (SCA) vendors – making it the most robust, mature, and adopted SBOM standard.

https://en.wikipedia.org/wiki/Software_Package_Data_Exchange

[-]

Show HN: Arxiv.org on IPFS

[+]

"Help compare Comment and Annotation services: moderation, spam, notifications, configurability" executablebooks/meta#102 https://github.com/executablebooks/meta/discussions/102 :

> jupyter-comment supports a number of commenting services [...]. In helping users decide which commenting and annotation services to include on their pages and commit to maintaining, could we discuss criteria for assessment and current features of services?

> Possible features for comparison:

> * Content author can delete / hide

> * Content author can report / block

> * Comments / annotations are screened by spam-fighting service

> * Content / author can label as e.g. toxic

> * Content author receives notification of new comments

> * Content author can require approval before user-contributed content is publicly-visible

> * Content author may allow comments for a limited amount of time (probably more relevant to BlogPostings)

> * Content author may simultaneously denounce censorship in all its forms while allowing previously-published works to languish

#ForScience

FWIW, archiving repo2docker-compatible git repos with a DOI attached to a git tag, is possible with JupyterLite:

> JupyterLite is a JupyterLab distribution that runs entirely in the browser built from the ground-up using JupyterLab components and extensions

With JupyterLite, you can build a static archive of a repo2docker-like environment so that the ScholarlyArticle notebook (or Computer Modern LaTeX CSS), its SoftwareRelease dependencies, and possibly also the Datasets can be run in a browser tab with WASM: HTML + JS + WASM.

[-]

New Texas Abortion Law Likely to Unleash a Torrent of Lawsuits Against Education

[+]
[+]
[+]
[+]

IDK, what do we say here? That we're going to need to start making some changes?

Roman society context on this one:

Vestal virgins: https://en.wikipedia.org/wiki/Vestal_Virgin

Baiae: https://en.wikipedia.org/wiki/Baiae

https://pbsinternational.org/programs/underwater-pompeii/ :

> Baiae: an ancient Roman city lost to the same volcanoes that entombed Pompeii. But unlike Pompeii, Baiae sits under water, in the Bay of Naples. Nearly 2,000 years ago, the city was an escape for Rome’s rich and powerful elite, a place where they were free of the social restrictions of Roman society. But then the city sank into the ocean, to be forgotten in the annals of history. Now, a team of archaeologists is mapping the underwater ruins and piecing together what life was like in this playground for the rich. What made Baiae such a special place? And what happened to it?

Woe! Woe unto the obviously promiscuous.

[-]

DARPA grant to work on sensing and stimulating the brain noninvasively [video]

[+]
[+]
[+]

What about with realtime NIRS with an (inverse?) scattering matrix? From https://www.openwater.cc/technology :

> Below are examples of the image quality we have achieved with our breakthrough scanning systems that use just red and near-infrared light and ultrasound pings.

https://en.wikipedia.org/wiki/Near-infrared_spectroscopy

Another question: is it possible to do molecular identification similar to, IDK, quantum crystallography with photons of any wavelength, such as NIRS? Could that count things in samples?

https://twitter.com/westurner/status/1239012387367387138 :

> ... quantum crystallography: https://en.wikipedia.org/wiki/Quantum_crystallography There's probably some limit to infrared crystallography that anyone who knows anything about particles and lattices would know about ?

[+]
[+]

Which other strong and weak forces could [photonic,] sensors detect?

IIUC, they're shooting for realtime MRI resolution with NIRS, to be used to assist surgeons in realtime during surgery.

edit: https://en.wikipedia.org/wiki/Neural_oscillation#Overview says brainwaves are 1-150 Hz? IIRC compassion is achievable on a bass guitar.

[+]

> Table with resolution differences between different techniques:

Looks like MEG has the best temporal and spatial resolutions.

[+]

You mentioned "time-domain", and I recalled "time-polarization".

From https://twitter.com/westurner/status/1049860034899927040 :

https://web.archive.org/web/20171003175149/https://www.omnis...

"Mind Control and EM Wave Polarization Transductions" (1999)

> To engineer the mind and its operations directly, one must perform electrodynamic engineering in the *time domain*, not in the 3-space EM energy density domain.

Could be something there.

Topological Axion antiferromagnet https://phys.org/news/2021-07-layer-hall-effect-2d-topologic... :

> Researchers believe that when it is fully understood, TAI can be used to make semiconductors with potential applications in electronic devices, Ma said. The highly unusual properties of Axions will support a new electromagnetic response called the topological magneto-electric effect, paving the way for realizing ultra-sensitive, ultrafast, and dissipationless sensors, detectors and memory devices.

Optical topological antennas https://engineering.berkeley.edu/news/2021/02/light-unbound-... :

> The new work, reported in a paper published Feb. 25 in the journal Nature Physics, throws wide open the amount of information that can be multiplexed, or simultaneously transmitted, by a coherent light source. A common example of multiplexing is the transmission of multiple telephone calls over a single wire, but there had been fundamental limits to the number of coherent twisted light waves that could be directly multiplexed.

Rydberg sensor https://phys.org/news/2021-02-quantum-entire-radio-frequency... :

> Army researchers built the quantum sensor, which can sample the radio-frequency spectrum—from zero frequency up to 20 GHz—and detect AM and FM radio, Bluetooth, Wi-Fi and other communication signals.

> The Rydberg sensor uses laser beams to create highly-excited Rydberg atoms directly above a microwave circuit, to boost and hone in on the portion of the spectrum being measured. The Rydberg atoms are sensitive to the circuit's voltage, enabling the device to be used as a sensitive probe for the wide range of signals in the RF spectrum.

> "All previous demonstrations of Rydberg atomic sensors have only been able to sense small and specific regions of the RF spectrum, but our sensor now operates continuously over a wide frequency range for the first time,"

Sometimes people make posters or presentations for new tech in medicine.

The xMed Exponential Medicine conference / program is in November this year: https://twitter.com/ExponentialMed

Space medicine also presents unique constraints that more rigorously select from possible solutions: https://en.wikipedia.org/wiki/Space_medicine

There is no progress in medicine without volunteers for clinical research trials. https://en.wikipedia.org/wiki/Phases_of_clinical_research

https://clinicaltrials.gov/

[-]

New Ways to Be Told That Your Python Code Is Bad

[+]

As I recall, `object?` and `object??` work in IPython because the Python mailing list said that the ternary operator was not reserved. (IIRC there was no formal grammar, or collections.abc, or maybe even datetime or json at the time.)

Ternary expressions on one line require branch coverage to be enabled in e.g. pytest; otherwise the whole line will look covered by tests even when each branch on that line hasn't actually been tested.

  .get() -> Union[None, T]
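A minimal illustration, with a made-up function, of why line coverage alone is misleading here; with pytest-cov this would run as `pytest --cov --cov-branch`:

```python
def classify(n: int) -> str:
    # Two branches share one line: plain line coverage marks this line
    # covered after a single test, even if only one branch executed.
    return "even" if n % 2 == 0 else "odd"

assert classify(2) == "even"  # exercises only the first branch
assert classify(3) == "odd"   # branch coverage requires this case too
```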

[-]

Web-based editor

[+]
[+]
[+]

The ml-workspace docker image includes Git, Jupyter, VS Code, SSH, and "many popular data science libraries & tools" https://github.com/ml-tooling/ml-workspace

  docker run -p 8080:8080 -v "${PWD}:/workspace" mltooling/ml-workspace 
Cocalc-docker also includes Git, Jupyter, SSH, a collaborative LaTeX editor, a time slider, but no code-server or VScode out of the box: https://github.com/sagemathinc/cocalc-docker

  docker run --name=cocalc -d -v ~/cocalc:/projects -p 443:443 sagemathinc/cocalc

[-]

GitHub Copilot Generated Insecure Code in 40% of Circumstances During Experiment

[+]

> For comparison, what percentage of human-generated code is secure?

Yeah how did they measure? Did static and dynamic analysis find design bugs too?

Maybe - as part of a Copilot-assisted DevSecOps workflow involving static and dynamic analysis run by GitHub Actions CI - create Issues with CWE "Common Weakness Enumeration" URLs from e.g. the CWE Top 25 in order to train the team, and Pull Requests to fix each issue?: https://cwe.mitre.org/top25/
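A hypothetical sketch of the issue-filing step: mapping an analyzer finding to its CWE definition URL. The CWE IDs and names below are real Top 25 entries (abbreviated); the finding text and the `issue_title` helper are invented:

```python
# Real CWE Top 25 entries; names abbreviated from the official list.
CWE_TOP25 = {
    79: "Cross-site Scripting",
    89: "SQL Injection",
    787: "Out-of-bounds Write",
}

def issue_title(cwe_id: int, finding: str) -> str:
    """Build an issue title linking a finding to its CWE definition URL."""
    name = CWE_TOP25.get(cwe_id, "Common Weakness Enumeration")
    url = f"https://cwe.mitre.org/data/definitions/{cwe_id}.html"
    return f"CWE-{cwe_id} ({name}): {finding} ({url})"

print(issue_title(89, "string-concatenated query in reports.py"))
```

A CI job could emit one such title per finding and open an issue plus a fix PR for each.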

Which bots send PRs?

[-]

AAS Journals Will Switch to Open Access

[+]

> JOSS (Journal of Open Source Software) has managed to get articles indexed by Google Scholar [rescience_gscholar]. They publish their costs [joss_costs]: $275 Crossref membership, DOIs: $1/paper:

>> Assuming a publication rate of 200 papers per year this works out at ~$4.75 per paper

> [joss_costs]: https://joss.theoj.org/about#costs

^^ from https://news.ycombinator.com/item?id=24517711 and this log of my non-markdown, non-W3C Web Annotation threaded comments with URIs: https://westurner.github.io/hnlog/#comment-24517711

[+]
[+]

[Scholarly] Code review tools; criteria and implementations?

Does JOSS specify e.g. ReviewBoard, GitHub Pull Request reviews, or Gerrit for code reviews?

[+]

Thanks for the citations. Looks like Wikipedia has "software review" and "software peer review":

https://en.wikipedia.org/wiki/Software_review

https://en.wikipedia.org/wiki/Software_peer_review

I'd add "Antipatterns" > "Software" https://en.wikipedia.org/wiki/Anti-pattern#Software_design

and "Code smells" > "Common code smells" https://en.wikipedia.org/wiki/Code_smell#Common_code_smells

and "Design smells" for advanced reviewers: https://en.wikipedia.org/wiki/Design_smell

and the CWE "Common Weakness Enumeration" numbers and thus URLs for Issues from the CWE Top 25 and beyond: https://cwe.mitre.org/top25/

FWIW, many or most scientists are not even trying to be software engineers: they just write slow code without reusing already-tested components and expect someone else to review Pull Requests after their PDF is considered impactful. They know enough coding to push the bar for their domain a bit higher each time.

Are there points for at least planning, in writing, for the complete lifecycle and governance of an ongoing thesis defense of open source software for science: after we publish, what becomes of this code?

From https://joss.theoj.org/about#costs :

> Income: JOSS has an experimental collaboration with AAS publishing where authors submitting to one of the AAS journals can also publish a companion software paper in JOSS, thereby receiving a review of their software. For this service, JOSS receives a small donation from AAS publishing. In 2019, JOSS received $200 as a result of this collaboration.

[+]

Moderation costs money, too.

Additional ScholarlyArticle "Journal" costs: moderation; BinderHub / JupyterLite white label SaaS?; hosting data and archived reproducible container images on IPFS, academictorrents, and Git LFS; hosting {SQL, SPARQL, GraphQL,} queries and/or a SOLID HTTPS REST API and/or RSS feeds with dynamic content but static feed item URIs and/or ActivityStreams and/or https://schema.org/Action & InteractAction & https://schema.org/ReviewAction & ClaimReview fact check reviews; W3C Web Notifications; CRM + emailing list; keeping a legit cohort of impactful peer reviewers,

#LinkedData for #LinkedResearch: Dokieli, parsing https://schema.org/ScholarlyArticle citation styles,

> keeping a legit cohort of impactful peer reviewers, [who are time-constrained and unpaid, as well]

"Ask HN: How are online communities established?" https://news.ycombinator.com/item?id=24443965 re: building community, MCOS Marginal Cost of Service, CLV Customer Lifetime Value, etc

[-]

White House Launches US Digital Corps

[+]

> I've worked with state government as a volunteer advisor. They're still developing everything with waterfall. Only contracting out to big firms, even if it's a small project. Lawmakers and aides sit in a room and write down what is to be done.

The US Digital Services Playbook likely needs few modifications for use at state and local levels? https://github.com/usds/playbook#readme

"PLAY 1: Understand what people need" https://playbook.cio.gov/#play1

"PLAY 4: Build the service using agile and iterative practices" https://playbook.cio.gov/#play4

Do [lawmakers and aides] make good "Product Owners", stakeholders, [incentivized, gamified] app feedback capability utilizers? GitLab has Service Desk: you can email the service desk address without having an account, as necessary to create and follow up on [software] issues in GitHub/BitBucket/GitLab/Gitea project management systems.

> That's changing at the federal level. They know they've got a problem. Why shouldn't federal software be as easy to use as the best web software? If you've ever tried to use it you will quickly learn that isn't the case.

"PLAY 3: Make it simple and intuitive" https://playbook.cio.gov/#play3

> Some sites will only work with IE and no other browser. Developers in two years can make a huge difference for making the government be more agile and operate better.

US Web Design Standards https://designsystem.digital.gov/

From https://github.com/uswds/uswds#browser-support :

>> We’ve designed the design system to support older and newer browsers through progressive enhancement. The current major version of the design system (2.0) follows the 2% rule: we officially support any browser above 2% usage as observed by analytics.usa.gov. Currently, this means that the design system version 2.0 supports the newest versions of Chrome, Firefox, Safari, and Internet Explorer 11 and up.

> I always suggest joining a local Code For America brigade. Work on a local project and see if it is for you. If you find yourself drawn to it then consider applying for a two year stint with the federal government. You can really make a difference!

From https://en.wikipedia.org/wiki/Code_for_America :

>> [...] described Code for America as "the technology world's equivalent of the Peace Corps or Teach for America". The article goes on to say, "They bring fresh blood to the solution process, deliver agile coding and software development skills, and frequently offer new perspectives on the latest technology—something that is often sorely lacking from municipal government IT programs. This is a win-win for cities that need help and for technologists that want to give back and contribute to lower government costs and the delivery of improved government service."

[-]

Launch HN: Litnerd (YC S21) – Teaching kids to read with the help of live actors

Hi HN, my name is Anisa and I am the founder of Litnerd (https://litnerd.com/), an online reading program designed to teach elementary school students in America how to read.

There are 37M elementary school students in America. Schools spend $20B on reading and supplemental education programs. Yet 42% of 4th grade students are reading at a 1st or 2nd grade proficiency level! The #1 reason students aren’t reading? They say it’s boring. We change that by bringing books to life. Think your favorite book turned into a tv-show style episode-by-episode reenactment, coupled with a complete curriculum and lesson plans.

1 in 8 Americans is functionally illiterate. Like any skill, reading is a habit. If you grew up in a household where you did not see your parents reading, you likely do not develop the habit. This correlates to the socio-economic divide. Two thirds of American students who lack reading skills by the end of fourth grade will rely on welfare as adults. To impact this, research suggests that we need to start at the earliest years.

I am passionate about the research in support of art and theatre as well as story-telling to improve childhood learning. Litnerd is the marriage of these interests. The inspiration comes from Sesame Street and Hamilton The Musical. In the late 60s, Joan Cooney decided to produce a children’s TV show that would influence children across America to learn to read—it became Sesame Street. Cooney researched her idea extensively, consulting with sociologists and scientists, and found that TV’s stickiness can be an important tool for education. Lin-Manuel Miranda took the story of Alexander Hamilton and brought it to life as a musical. Kids have learned more about Hamilton’s history thanks to Hamilton the Musical than any of their textbooks. In fact, this was the case so much that a program called EduHam is used to teach history in middle schools across the nation. When I heard that, the lightbulb went off and I decided to go all in on starting Litnerd.

We hire art and theatre professionals to recreate scenes directly from books in episode style format to bring the book to life, in a similar fashion to watching your favorite TV shows. We literally lead 'read out loud' in the classroom while the teacher/actor is acting out the main character in the book. We have a weekly designated Litnerd period in the schools/classes we serve and we live-stream in our teachers/actors for an interactive session (the students participate and read live with the actor as well as complete written lesson plans, phonetic exercises etc). We are currently serving 14,000 students in this manner.

The format of our program is such that if you don't complete the assigned reading and worksheets, you will feel like you are missing out on what is happening in later episodes. In this way, reading is layered in as a fundamental core to the program. Our program is part of scheduled classroom time.

A big part of our business involves curating content and materials that capture the interest and coolness-factor for elementary school students. We’ve found that students love choose-your-own-adventure style stories, especially ones involving mythical creatures—something about being able to have autonomy on the outcomes. So far, it seems to be working. We've even received fan mail from students! But we are obsessed with staying cool/relevant in our content.

Teachers like our product because it eases the burden placed on them. US teachers typically spend 4 to 10 hours a week (unpaid) planning their curriculum and $400-800 of their own money for classroom supplies. That's outrageous! When designing Litnerd, we wanted to ensure our product was not adding more work to their plate. Our programs are led by our own Resident Teaching Artists, who are live streamed into the classroom and remain in character to the episode as they teach the Litnerd curriculum built on top of the books. Our programs come with lesson plans, activity packets, curriculum correlations, educator resources, and complete ebooks.

Traditional K-12 education has extremely long sale cycles and is hard to break into. It can take years to become a contracted vendor, especially with large districts like NYC Department of Education. Because of my experience with my first YC backed startup that sold to government and nonprofits, coupled with my experience working at a large edtech company that built content for Higher Ed, I understand this sector and how to navigate the budget line item process.

Since launching in January, we have become contracted vendors with the New York City Department of Education (the largest education district in America). As a result, we’ve been growing at 60% MoM, are currently used by over 14k students in their classrooms and hit $110K in ARR. Our program is part of scheduled classroom time for elementary schools—not homework, and not extracurricular. Here’s a walkthrough video from a teacher’s perspective: https://www.loom.com/share/9ffc59f0d7ed4a66964003703bba7b94.

I am so grateful for the opportunity to share our story and mission with you. If you loved or struggled with reading as a kid, what factors do you think contributed? Also, if you have experience teaching elementary school or if you are a parent, I would love to hear your thoughts and ideas on how you foster reading amongst your students/children! I am excited to hear your feedback and ideas to help us inspire the next generation of readers.

[+]
[+]

TIL a new acronym word symbol lexeme: SEL: Social and Emotional Learning

> Social Emotional Learning (SEL) is an education practice that integrates social emotional skills into school curriculum. SEL is otherwise referred to as "socio-emotional learning" or "social-emotional literacy." When in practice, social emotional learning has equal emphasis on social and emotional skills to other subjects such as math, science, and reading.[1] The five main components of social emotional learning are self-awareness, self management, social awareness, responsible decision making, and relationship skills.

https://en.wikipedia.org/wiki/Social_and_Emotional_Learning

For good measure, Common Core English Language Arts standards: https://en.wikipedia.org/wiki/Common_Core_State_Standards_In...

Khan Academy has 2nd-9th Grade ELA exercises: English & Language Arts: https://www.khanacademy.org/ela

Unfortunately AFAIU there's not a good way to explore the Khan Academy Kids curriculum graph; which definitely does include reading: https://learn.khanacademy.org/khan-academy-kids/

> The app engages kids in core subjects like early literacy, reading, writing, language, and math, while encouraging creativity and building social-emotional skills

In terms of phonemic awareness and phonological awareness, is there a good survey of US and world reading programs and their evidence basis, if any?

From https://en.wikipedia.org/wiki/Phonemic_awareness :

> Phonemic awareness is a subset of phonological awareness in which listeners are able to hear, identify and manipulate phonemes, the smallest mental units of sound that help to differentiate units of meaning (morphemes). Separating the spoken word "cat" into three distinct phonemes, /k/, /æ/, and /t/, requires phonemic awareness. The National Reading Panel has found that phonemic awareness improves children's word reading and reading comprehension and helps children learn to spell.[1] Phonemic awareness is the basis for learning phonics.[2]

> Phonemic awareness and phonological awareness are often confused since they are interdependent. Phonemic awareness is the ability to hear and manipulate individual phonemes. *Phonological awareness includes this ability, but it also includes the ability to hear and manipulate larger units of sound, such as onsets and rimes and syllables.*

What are some of the more evidence-based (?) (early literacy,) reading curricula? OTOH: LETRS, Heggerty, PAL: https://www.google.com/search?q=site%3Aen.wikipedia.org+%22l...

Looks like Cambium acquired e.g. Kurzweil Education in 2005?

More context:

Reading readiness in the United States: https://en.wikipedia.org/wiki/Reading_readiness_in_the_Unite...

Emergent literacies: https://en.wikipedia.org/wiki/Emergent_literacies

An interactive IPA chart with videos and readings linked with RDF (e.g. ~WordNet RDF) would be great. From "Duolingo's language notes all on one page" https://westurner.github.io/hnlog/#comment-26430146 :

> An IPA (International Phonetic Alphabet) reference would be helpful, too. After taking linguistics in college, I found these Sozo videos of US english IPA consonants and vowels that simultaneously show {the ipa symbol, example words, someone visually and auditorily producing the phoneme from 2 angles, and the spectrogram of the waveform} but a few or a configurable number of [spaced] repetitions would be helpful: https://youtu.be/Sw36F_UcIn8

> IDK how cartoonish or 3d of an "articulatory phonetic" model would reach the widest audience. https://en.wikipedia.org/wiki/Articulatory_phonetics

> IPA chart: https://en.wikipedia.org/wiki/International_Phonetic_Alphabe...

> IPA chart with audio: https://en.wikipedia.org/wiki/IPA_vowel_chart_with_audio

> All of the IPA consonant chart played as a video: "International Phonetic Alphabet Consonant sounds (Pulmonic)- From Wikipedia.org" https://youtu.be/yFAITaBr6Tw

> I'll have to find the link of the site where they playback youtube videos with multiple languages' subtitles highlighted side-by-side along with the video.

>> [...] Found it: https://www.captionpop.com/

>> It looks like there are a few browser extensions for displaying multiple subtitles as well; e.g. "YouTube Dual Subtitles", "Two Captions for YouTube and Netflix"

Phonics programs really could reference IPA from the start: there are different sounds for the same letters, and IPA is the most standard way to indicate how to pronounce words: it's in the old-school dictionary, and now it's in the Google "define:" (or just "define word") dictionary.

UN Sustainable Development Goal 4: Quality Education: https://www.globalgoals.org/4-quality-education

> Target 4.6: Universal Literacy and Numeracy

> By 2030, ensure that all youth and a substantial proportion of adults, both men and women, achieve literacy and numeracy.

https://sdgs.un.org/goals/goal4 :

> Indicator 4.6.1: Percentage of population in a given age group achieving at least a fixed level of proficiency in functional (a) literacy and (b) numeracy skills, by sex

... Goals, Targets, and Indicators.

Which traversals of a curriculum graph are optimal or sufficient?

You can add https://schema.org/about and https://schema.org/educationalAlignment Linked Data to your [#OER] curriculum resources to increase discoverability and reusability.
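For example, a minimal JSON-LD sketch of tagging an OER resource with schema.org about and educationalAlignment (the resource name and values are hypothetical placeholders):

```python
import json

# Hypothetical OER resource described with schema.org Linked Data.
resource = {
    "@context": "https://schema.org",
    "@type": "LearningResource",
    "name": "Counting Syllables",
    "about": {"@type": "Thing", "name": "Phonological awareness"},
    "educationalAlignment": {
        "@type": "AlignmentObject",
        "alignmentType": "teaches",
        "educationalFramework": "Common Core State Standards",
        "targetName": "CCSS.ELA-LITERACY.RF.K.2",
    },
}

# Serialize for embedding in a <script type="application/ld+json"> tag.
jsonld = json.dumps(resource, indent=2)
print(jsonld)
```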

Aarne–Thompson–Uther Index code URN URIs could be helpful: https://en.wikipedia.org/wiki/Aarne%E2%80%93Thompson%E2%80%9...

> The Aarne–Thompson–Uther Index (ATU Index) is a catalogue of folktale types used in folklore studies.

Are there competencies linked to maybe a nested outline that we typically traverse in depth-first order? https://github.com/todotxt/todo.txt : Todo.txt format has +succinct @context labels. Some way to record and score our own paths objectively would be great.
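Todo.txt's +project and @context tags are simple enough to extract with a regex; a minimal sketch (the task text is a hypothetical competency-tracking entry):

```python
import re

def parse_todo_line(line):
    """Extract +project and @context tags from a todo.txt-format line."""
    return {
        "projects": re.findall(r"(?:^|\s)\+(\S+)", line),
        "contexts": re.findall(r"(?:^|\s)@(\S+)", line),
        "text": line,
    }

# Hypothetical competency entry in todo.txt format:
task = parse_todo_line("Segment CVC words into phonemes +phonics @curriculum")
print(task["projects"], task["contexts"])
```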

There are books about raising a read-aloud family: promoting a culture of spontaneously reading aloud, to whoever happens to be listening, for example.

Writing letters, too.

> What are some of the more evidence-based (?) (early literacy,) reading curricula? OTOH: LETRS, Heggerty, PAL

Looks like there are only 21 search results for: "LETRS" "Fundation" "Heggerty": https://www.google.com/search?q="LETRS"+"fundation"+"heggert...

What is the name for this category of curricula?

Perhaps the US Department of Education or similar could compare early reading programs in a wiki[pedia] page, according to criteria that include measures of evidence-basedness? Just like https://collegescorecard.ed.gov/data/ has "aggregate data for each institution [&] Includes information on institutional characteristics, enrollment, student aid, costs, and student outcomes."

From YouTube, it looks like there are cool hand motions for Heggerty.

[-]

An Opinionated Guide to Xargs

Wanting verbose logging from xargs, years ago I wrote a script called `el` (edit lines) that basically does `xargs -0` with logging. https://github.com/westurner/dotfiles/blob/develop/scripts/e...

It turns out that e.g. -print0 and -0 are the only safe way: filenames may contain newlines, and line-based tools don't escape them:

    find . -type f -print0 | el -0 --each -x echo
GNU Parallel is a much better tool: https://en.wikipedia.org/wiki/GNU_parallel
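A minimal sketch of the NUL-delimited, logged-execution idea behind `el` (a simplification assuming stdlib only; the real script has more options):

```python
import io
import subprocess
import sys

def each_null_delimited(stream, cmd, log=sys.stderr):
    """Run cmd once per NUL-delimited item (like `xargs -0 -n1`),
    logging each invocation. NUL framing is the only safe choice
    because POSIX filenames may contain newlines and spaces."""
    calls = []
    for item in filter(None, stream.read().split(b"\0")):
        argv = cmd + [item.decode()]
        print("+", argv, file=log)        # verbose log, like `set -x`
        subprocess.run(argv, check=True)
        calls.append(argv)
    return calls

# Simulated `find . -print0` output; note the filename with a space.
calls = each_null_delimited(io.BytesIO(b"a.txt\0b c.txt\0"), ["echo"])
```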

[-]

Enhanced Support for Citations on GitHub

> CITATION.cff files are plain text files with human- and machine-readable citation information. When we detect a CITATION.cff file in a repository, we use this information to create convenient APA or BibTeX style citation links that can be referenced by others.

https://schema.org/ScholarlyArticle RDFa and JSON-LD can be parsed with a standard Linked Data parser. Looks like YAML-LD requires quoting e.g. "@context": and "@id":

From https://docs.github.com/en/github/creating-cloning-and-archi... ; in your repo's /CITATION.cff:

  cff-version: 1.2.0
  message: "If you use this software, please cite it as below."
  authors:
  - family-names: "Lisa"
    given-names: "Mona"
    orcid: "https://orcid.org/0000-0000-0000-0000"
  - family-names: "Bot"
    given-names: "Hew"
    orcid: "https://orcid.org/0000-0000-0000-0000"
  title: "My Research Software"
  version: 2.0.4
  doi: 10.5281/zenodo.1234
  date-released: 2017-12-18
  url: "https://github.com/github/linguist"
https://citation-file-format.github.io/
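A naive, stdlib-only sketch of reading the flat top-level keys out of a CITATION.cff (a real tool should use a YAML parser or cffconvert; this skips nested entries such as the authors list):

```python
SAMPLE = """\
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Lisa"
  given-names: "Mona"
title: "My Research Software"
version: 2.0.4
doi: 10.5281/zenodo.1234
"""

def parse_flat_cff(text):
    """Naive parse of top-level `key: value` lines from CITATION.cff.
    Illustration only: use PyYAML or cffconvert for real parsing."""
    fields = {}
    for line in text.splitlines():
        # Skip nested/indented lines and list items (e.g. the authors list).
        if line[:1].isspace() or line.startswith("-") or ":" not in line:
            continue
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip('"')
    return fields

cff = parse_flat_cff(SAMPLE)
print(f'{cff["title"]} v{cff["version"]} (doi:{cff["doi"]})')
```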

[-]

Canada calls screen scraping ‘unsecure,’ sets Open Banking target for 2023


AFAIU, there are still zero (0) US consumer banks offering read-only (e.g. OAuth) APIs as well?

Banks could save themselves CPU, RAM, bandwidth, and liability by implementing read-only API tokens and methods that need only return JSON - instead of HTML or worse, monthly PDF tables for a fee - possibly similar to the Plaid API: https://plaid.com/docs/api/
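For illustration, here's what a third-party budgeting client could do with such a read-only JSON endpoint (the response shape and field names are assumptions, loosely Plaid-shaped, not a real API):

```python
import json
from collections import defaultdict

# Hypothetical response from a read-only /transactions endpoint.
response = json.loads("""
{"transactions": [
  {"date": "2021-07-01", "category": "groceries", "amount": 54.10},
  {"date": "2021-07-02", "category": "transport", "amount": 12.00},
  {"date": "2021-07-09", "category": "groceries", "amount": 31.25}
]}
""")

# Per-category spending summary, computed without ever holding
# the user's username/password/security answers.
totals = defaultdict(float)
for txn in response["transactions"]:
    totals[txn["category"]] += txn["amount"]

print(dict(totals))
```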

There is competition in consumer/retail banking, but still the only way to do e.g. budget and fraud analysis with third-party apps is to give away all authentication factors (username / password / security question answers); and TBH that's unacceptable.

Traditional and distributed ledger service providers might also consider W3C ILP (Interledger Protocol) when comparing the entropy of username + password_hash + security_question_answer strings with the entropy of cryptoasset account public key hash strings: https://interledger.org/developer-tools/get-started/overview... (And they might start their move to quantum-resistant ledgers by 2022, in order to have a five-year refresh cycle before QC is a real risk by 2027, optimistically, for science.)

> Sender – Initiates a value transfer.

> Router (Connector) – Applies currency exchange and forwards packets of value. This is an intermediary node between the sender and the receiver. {MSB: KYC, AML, 10k reporting requirement, etc}

> Receiver – Receives the value

Multifactor authentication: Something you have, something you know, something you are

Multisig: n-of-m keys required to approve a transaction
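A toy sketch of the n-of-m rule, using HMAC tags as stand-ins for real public-key signatures (the key names and scheme are illustrative only):

```python
import hashlib
import hmac

# m = 3 registered keys; a transaction needs n valid tags to be approved.
KEYS = {b"key-alice", b"key-bob", b"key-carol"}

def sign(key, message):
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def approved(message, signatures, n=2):
    """True when at least n of the m keys produced a valid tag."""
    valid = sum(
        1 for key in KEYS
        if any(hmac.compare_digest(sign(key, message), s) for s in signatures)
    )
    return valid >= n

tx = b"pay 10 to dave"
sigs = [sign(b"key-alice", tx), sign(b"key-bob", tx)]
print(approved(tx, sigs))        # 2-of-3: approved
print(approved(tx, [sigs[0]]))   # only 1-of-3: rejected
```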

Edit: from "Fed announces details of new interbank service to support instant payments" https://news.ycombinator.com/item?id=24109576 :

> For purposes of Interledger, we call all settlement systems ledgers. These can include banks, blockchains, peer-to-peer payment schemes, automated clearing house (ACH), mobile money institutions, central-bank operated real-time gross settlement (RTGS) systems, and even more. […]

> You can envision the Interledger as a graph where the points are individual nodes and the edges are accounts between two parties. Parties with only one account can send or receive through the party on the other side of that account. Parties with two or more accounts are connectors, who can facilitate payments to or from anyone they're connected to.

> Connectors [AKA routers] provide a service of forwarding packets and relaying money, and they take on some risk when they do so. In exchange, connectors can charge fees and derive a profit from these services. In the open network of the Interledger, connectors are expected to compete among one another to offer the best balance of speed, reliability, coverage, and cost.

W3C ILP: Interledger Protocol > Peering, Clearing and Settling: https://interledger.org/rfcs/0032-peering-clearing-settlemen...

> Hopefully individuals will be able to use the Open Banking APIs to access their own data directly, but it looks like accreditation will be required, so probably not.

You loan your money to a bank by depositing ledger dollars or cash; since GLBA in 1999, they invest it while offering less than a 1% checking interest rate; and they won't even give you the record of all of your transactions as CSV/OFX (`SELECT * FROM transactions WHERE account_id=?`). Instead, because they don't keep all account history data online, you have to pay $20/mo per autogenerated PDF containing a table of transactions to scrape with e.g. PDFMiner?

Seemingly OT, but not. APIs for comparison here:

FinTS / HBCI: Home Banking Computer Information protocol https://en.wikipedia.org/wiki/FinTS

E.g. GNUcash (open source double-entry accounting software) supports HBCI (and QIF (Quicken format), and OFX (Open Financial Exchange)). https://www.gnucash.org/features.phtml

HBCI/FinTS has been around in Germany for quite a while, but nowhere else has comparable banking standards? I.e. Plaid may (unfortunately, due to the lack of read-only tokens across the entire US consumer banking industry) be the most viable option for implementing HBCI-like support in GNUcash.

OpenBanking API Specifications: https://standards.openbanking.org.uk/api-specifications/

Web3 (Ethereum,) APIs: https://web3py.readthedocs.io/en/stable/web3.main.html#rpc-a...

ISO20022 is "A single standardisation approach (methodology, process, repository) to be used by all financial standards initiatives" https://www.iso20022.org/

Brazil's PIX is one of the first real implementers of ISO20022. A note regarding such challenges: https://news.ycombinator.com/item?id=24104351

What data format does the SEC's CAT (Consolidated Audit Trail) expect to receive mandatory financial reporting information in? Could ILP simplify banking and financial reporting at all?

FWIU, RippleNet (?) is the only network that supports attachments of e.g. line-item invoices (that we'd all like to see in the interest of transparency and accountability in government spending).

W3C ILP: Interledger Protocol. See links above.

Of the specs in this loose category, only cryptoledgers avoid depending (at the protocol layer, at least) upon DNS or TLS/SSL, and thus upon every CA in the kept-up-to-date trusted CA cert bundle (a bundle that could be built from a CT (Certificate Transparency) log of cert issuance and revocation events kept in a blockchain, or e.g. in centralized google/trillian, for which Google holds the sole trusted-root and backup responsibilities).

Though, the DNS dependency has probably crept back into e.g. the bitcoind software by now (which used to bootstrap its list of peer nodes (~UNL) from an IRC IP address instead of a DNS domain).

FWIU, each trusted ACH (US 'Direct Deposit') party has a (one) GPG key that they use to sign transaction documents, now sent over (S)FTP on scout's honor, on behalf of all of their customers' accounts.

[-]

Interactive Linear Algebra (2019)


https://github.com/topics/linear-algebra?l=jupyter+notebook lists "Computational Linear Algebra for Coders" https://github.com/fastai/numerical-linear-algebra

"site:GitHub.com inurl:awesome linear algebra jupyter" lists a few awesome lists with interactive linear algebra resources: https://www.google.com/search?q=site%3Agithub.com+inurl%3Aaw...

3blue1brown's "Essence of linear algebra" playlist has some excellent tutorials with intuition-building visualizations built with manim: https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFit...

https://github.com/ManimCommunity/manim

[-]

Git password authentication is shutting down


`git pull --rebase` usually is what I need to do. To save local changes and rebase to the git remote's branch:

  # git branch -av;
  # git remote -v;
  # git reflog; git help reflog; man git-reflog
  # git show HEAD@{0}
  # git log -n5 --graph;
  git add -A; git status;
  git stash; git stash list;
  git pull --rebase;
  #git pull --rebase origin develop
  # git fetch origin develop
  # git rebase origin/develop
  git stash pop;
  git stash list;
  git status;
  # git commit
  # git rebase -i HEAD~5 # squash
  # git push
HubFlow does branch merging correctly because I never can. Even when it's just me and I don't remember how I was handling tags of releases on which branch, I just reach for HubFlow now and it's pretty much good.

There's a way to default to --rebase for pulls: is there a reason not to set that in a global gitconfig? Edit: From https://stackoverflow.com/questions/13846300/how-to-make-git... :

> There are now 3 different levels of configuration for default pull behaviour. From most general to most fine grained they are: […]

  git config --global pull.rebase true

[-]

A future for SQL on the web


TIL, about Graph "Protocol for building decentralized applications quickly on Ethereum" https://github.com/graphprotocol

https://thegraph.com/docs/indexing

> Indexers are node operators in The Graph Network that stake Graph Tokens (GRT) in order to provide indexing and query processing services. Indexers earn query fees and indexing rewards for their services. They also earn from a Rebate Pool that is shared with all network contributors proportional to their work, following the Cobb-Douglas Rebate Function.

> GRT that is staked in the protocol is subject to a thawing period and can be slashed if Indexers are malicious and serve incorrect data to applications or if they index incorrectly. Indexers can also be delegated stake from Delegators, to contribute to the network.

> Indexers select subgraphs to index based on the subgraph’s curation signal, where Curators stake GRT in order to indicate which subgraphs are high-quality and should be prioritized. Consumers (eg. applications) can also set parameters for which Indexers process queries for their subgraphs and set preferences for query fee pricing.
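The Cobb-Douglas rebate mentioned above allocates the pool in proportion to a weighted product of each contributor's fee share and stake share; a sketch with assumed parameter names (see The Graph's docs for the actual formula and protocol parameters):

```python
def cobb_douglas_rebate(pool, fee_share, stake_share, alpha=0.5):
    """Rebate = pool * fee_share^alpha * stake_share^(1 - alpha).

    Parameter names are illustrative, not The Graph's actual API."""
    return pool * (fee_share ** alpha) * (stake_share ** (1 - alpha))

# A contributor with 25% of fees AND 25% of stake earns 25% of the pool;
# mismatched fee/stake shares earn less than the larger share alone.
print(cobb_douglas_rebate(1000, 0.25, 0.25))    # 250.0
print(cobb_douglas_rebate(1000, 0.25, 0.0625))  # 125.0
```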

It's Ethereum though, so it's LevelDB, not SQLite on IndexedDB on SQLite.

[-]

Show HN: Python Source Code Refactoring Toolkit via AST


Did you consider PyCQA/RedBaron (which is based upon PyCQA/baron, an AST implementation which preserves comments and whitespace)? https://redbaron.readthedocs.io/en/latest/
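For contrast, the stdlib `ast` module is lossy, which is exactly why refactoring tools reach for lossless ("full syntax") trees like baron/RedBaron or LibCST; a quick demonstration (requires Python 3.9+ for `ast.unparse`):

```python
import ast

source = """\
def add(a, b):  # important comment
    return a + b
"""

tree = ast.parse(source)
func = tree.body[0]
print(type(func).__name__, func.name)

# Round-tripping through the stdlib AST loses the comment entirely:
print(ast.unparse(tree))
```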


Rog. I think CodeQL (GitHub acquired Semmle and QL in 2019) supports those types of queries; probably atop lib2to3 as well. https://codeql.github.com/docs/writing-codeql-queries/introd...

From https://news.ycombinator.com/item?id=24511280 :

> Additional lists of static analysis, dynamic analysis, SAST, DAST, and other source code analysis tools […]

[-]

Emacs' org-mode gets citation support

FWIW, Jupyter-book handles Citations and bibliographies with sphinxcontrib-bibtex: https://jupyterbook.org/content/citations.html

Some notes about Zotero and Schema.org RDFa for publishing [CSL with citeproc] citations: references of Linked Data resources in a graph, with URIs all: https://wrdrd.github.io/docs/tools/index#zotero-and-schema-o...

Compared to trying to parse beautifully typeset bibliographies in PDFs built from LaTeX with a Computer Modern font, search engines can more easily index e.g. https://schema.org/ScholarlyArticle linked data as RDFa, Microdata, or JSON-LD.

Scholarly search engines: Google Scholar, Semantic Scholar, Meta.org.

[-]

NSA Kubernetes Hardening Guidance [pdf]


Looks like there's actually a "summary of the key recommendations from each section" on page 2.

> Works cited:

> [1] Center for Internet Security, "Kubernetes," 2021. [Online]. Available: https://cisecurity.org/resources/?type=benchmark&search=kube... .

> [2] DISA, "Kubernetes STIG," 2021. [Online]. Available: https://dl.dod.cyber.mil/wp-content/uploads/stigs/zip/U_Kubernetes_V1R1_STIG.zip. [Accessed 8 July 2021].

> [3] The Linux Foundation, "Kubernetes Documentation," 2021. [Online]. Available: https://kubernetes.io/docs/home/ . [Accessed 8 July 2021].

> [4] The Linux Foundation, "11 Ways (Not) to Get Hacked," 18 07 2018. [Online]. Available: https://kubernetes.io/blog/2018/07/18/11-ways-not-to-get-hac... . [Accessed 8 July 2021].

> [5] MITRE, "Unsecured Credentials: Cloud Instance Metadata API." MITRE ATT&CK, 2021. [Online]. Available: https://attack.mitre.org/techniques/T1552/005/. [Accessed 8 July 2021].

> [6] CISA, "Analysis Report (AR21-013A): Strengthening Security Configurations to Defend Against Attackers Targeting Cloud Services." Cybersecurity and Infrastructure Security Agency, 14 January 2021. [Online]. Available: https://us-cert.cisa.gov/ncas/analysis-reports/ar21-013a. [Accessed 8 July 2021].

How can k8s and zero trust co-occur?

> CISA encourages administrators and organizations [to] review NSA’s guidance on Embracing a Zero Trust Security Model to help secure sensitive data, systems, and services.

"Embracing a Zero Trust Security Model" (2021, as well) https://media.defense.gov/2021/Feb/25/2002588479/-1/-1/0/CSI...

In addition to "zero [trust]", I also looked for the term "SBOM". From p. 32/39:

> As updates are deployed, administrators should also keep up with removing any old components that are no longer needed from the environment. Using a managed Kubernetes service can help to automate upgrades and patches for Kubernetes, operating systems, and networking protocols. *However, administrators must still patch and upgrade their containerized applications.*

"Existing artifact vuln scanners, databases, and specs?" https://github.com/google/osv/issues/55

[-]

Hosting SQLite Databases on GitHub Pages


> Methods for remotely accessing/paging data in from a client when a complete download of the dataset is unnecessary:

> - Query e.g. parquet on e.g. GitHub with DuckDB: duckdb/test_parquet_remote.test https://github.com/duckdb/duckdb/blob/6c7c9805fdf1604039ebed...

> - Query sqlite on e.g. GitHub with SQLite: [Hosting SQLite databases on Github Pages - (or any static file hoster) - phiresky's blog](...)

>> The above query should do 10-20 GET requests, fetching a total of 130 - 270KiB, depending on if you ran the above demos as well. Note that it only has to do 20 requests and not 270 (as would be expected when fetching 270 KiB with 1 KiB at a time). That’s because I implemented a pre-fetching system that tries to detect access patterns through three separate virtual read heads and exponentially increases the request size for sequential reads. This means that index scans or table scans reading more than a few KiB of data will only cause a number of requests that is logarithmic in the total byte length of the scan. You can see the effect of this by looking at the “Access pattern” column in the page read log above.

> - bittorrent/sqltorrent https://github.com/bittorrent/sqltorrent

>> Sqltorrent is a custom VFS for sqlite which allows applications to query an sqlite database contained within a torrent. Queries can be processed immediately after the database has been opened, even though the database file is still being downloaded. Pieces of the file which are required to complete a query are prioritized so that queries complete reasonably quickly even if only a small fraction of the whole database has been downloaded.

>> […] Creating torrents: Sqltorrent currently only supports torrents containing a single sqlite database file. For efficiency the piece size of the torrent should be kept fairly small, around 32KB. It is also recommended to set the page size equal to the piece size when creating the sqlite database

Would BitTorrent be faster over HTTP/3 (UDP) or is that already a thing for web seeding?

> - https://web.dev/file-system-access/

> The File System Access API: simplifying access to local files: The File System Access API allows web apps to read or save changes directly to files and folders on the user’s device

Hadn't seen wilsonzlin/edgesearch, thx:

> Serverless full-text search with Cloudflare Workers, WebAssembly, and Roaring Bitmaps https://github.com/wilsonzlin/edgesearch

>> How it works: Edgesearch builds a reverse index by mapping terms to a compressed bit set (using Roaring Bitmaps) of IDs of documents containing the term, and creates a custom worker script and data to upload to Cloudflare Workers
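The exponential read-ahead quoted above (for the SQLite-over-HTTP case) can be modeled in a few lines; this is a simplification, since the real implementation tracks three virtual read heads and detects access patterns:

```python
def plan_sequential_reads(total_bytes, first_request=1024, growth=2):
    """Plan `Range: bytes=start-end` requests for a sequential scan with
    exponential read-ahead: each request is `growth` times larger than
    the last, so an N-byte scan costs O(log N) requests instead of
    N / chunk_size fixed-size requests."""
    ranges, offset, size = [], 0, first_request
    while offset < total_bytes:
        end = min(offset + size, total_bytes)
        ranges.append((offset, end - 1))   # inclusive byte range
        offset, size = end, size * growth
    return ranges

reqs = plan_sequential_reads(270 * 1024)
print(len(reqs))   # 9 requests for a 270 KiB scan, vs. 270 at a fixed 1 KiB
```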


Thanks. There likely are relative advantages to HTTP/3 QUIC. Here's this from Wikipedia:

> Both HTTP/1.1 and HTTP/2 use TCP as their transport. HTTP/3 uses QUIC, a transport layer network protocol which uses user space congestion control over the User Datagram Protocol (UDP). The switch to QUIC aims to fix a major problem of HTTP/2 called "head-of-line blocking": because the parallel nature of HTTP/2's multiplexing is not visible to TCP's loss recovery mechanisms, a lost or reordered packet causes all active transactions to experience a stall regardless of whether that transaction was impacted by the lost packet. Because QUIC provides native multiplexing, lost packets only impact the streams where data has been lost.

And HTTP Pipelining / Multiplexing isn't specified by just UDP or QUIC:

> HTTP/1.1 specification requires servers to respond to pipelined requests correctly, sending back non-pipelined but valid responses even if server does not support HTTP pipelining. Despite this requirement, many legacy HTTP/1.1 servers do not support pipelining correctly, forcing most HTTP clients to not use HTTP pipelining in practice.

> The technique was superseded by multiplexing via HTTP/2,[2] which is supported by most modern browsers.[3]

> In HTTP/3, the multiplexing is accomplished through the new underlying QUIC transport protocol, which replaces TCP. This further reduces loading time, as there is no head-of-line blocking anymore.

https://en.wikipedia.org/wiki/HTTP_pipelining

[-]

Ask HN: Any good resources on how to be a great technical advisor to startups?

Bumping up https://news.ycombinator.com/item?id=27600539

## Codelabels: Component: title

### ENH,UBY: HN: linkify URIs in descriptions

## User Stories

Users {__, __, } can ___ in order to ___.

Given-When-Then

~ Who-What-Wow

~ {Marketing, Training, Support, Service} Curriculum Competencies

### Users can click on links in descriptions in order to review referenced off-site resources.

Costs/Benefits: Linkspam?

The URL from this {item,} description: https://news.ycombinator.com/item?id=27600539

[-]

Teaching other teachers how to teach CS better

https://code.org/teach

git and HTML and Linked Data should be requisite: https://learngitbranching.js.org/

Pedagogy#Modern_pedagogy: https://en.wikipedia.org/wiki/Pedagogy#Modern_pedagogy

Evidence-based_education: https://en.wikipedia.org/wiki/Evidence-based_education

Computational_thinking#Characteristics: https://en.wikipedia.org/wiki/Computational_thinking#Charact... (Abstraction, Automation, Analysis)

Learning: https://en.wikipedia.org/wiki/Learning

Autodidacticism: https://en.wikipedia.org/wiki/Autodidacticism

Design of Experiments; Hypotheses, troubleshooting, debugging, automated testing, Formal Methods, actual Root Cause Analysis: https://en.wikipedia.org/wiki/Design_of_experiments

Critical Thinking; definitions, Logic and Rationality, Logical Reasoning: Deduction, Abduction and Induction: https://en.wikipedia.org/wiki/Critical_thinking#Logic_and_ra...

Doesn't this all derive from [Quantum] Information Theory? It's actually fascinating to start at Information Theory; who knows what that curriculum would look like without reinforcement and [3D] videos: https://en.wikipedia.org/wiki/Information_theory

Stone, James V. "Information theory: a tutorial introduction." (2015). https://scholar.google.com/scholar?q=%22Information+Theory:+...

It used to be that we had to start engines with a turn of a crank: that initial energy to overcome inertia was enough for the system to feed-forward without additional reinforcement. Effective CS instruction may motivate the unmotivated to care about learning the way folks who are receiving reinforcement do: intrinsically.

[-]

Ask HN: Best online speech / public speaking course?

Hi HN - Has anyone taken an online course to help them with public speaking, speech and voice skills that they’d highly recommend? Thanks!

"TED Talks: The Official TED Guide to Public Speaking" https://smile.amazon.com/TED-Talks-Official-Public-Speaking-...

TED Masterclass: https://masterclass.ted.com/

"Power Talk: Using Language to Build Authority and Influence" https://smile.amazon.com/Power-Talk-Language-Authority-Influ...

Re: Clean Language and Symbolic Modeling; listening to metaphors and asking clean questions may be a more effective way to facilitate change: https://westurner.github.io/hnlog/#comment-15471868

/? greatest speeches: https://m.youtube.com/results?sp=mAEA&search_query=Greatest+...

"Lend Me Your Ears: Great Speeches in History" by William Safire. https://a.co/8svyoUw

E.g. "The Prosperity Bible: The Greatest Writings of All Time on the Secrets to Wealth and Prosperity" (Napoleon Hill, PT Barnum, Dale Carnegie, Gibran, Benjamin Franklin; 5000+ pages). https://a.co/b8Ej6o7

Talking points: Peaceful coexistence, #GlobalGoals 1-17 (UN SDGs), "Limits to Growth: The 30-Year Update" by Donella H. Meadows. https://a.co/7MgO0bv

[-]

Google sunsets the APK format for new Android apps

I was just trying to explain this the other day. Not sure whether to be disappointed: is this a regression? No, bros, you may not just `repack` and re-sign the package for me. That's not how it should work unless I trust their build server to sign for me; and I don't, and we shouldn't. I'll just CC this here from https://westurner.github.io/hnlog/#comment-27410978 :

```

> Unfortunately all packages aren't necessarily signed either; "Which package managers require packages to be cryptographically signed?" is similar to "Which DNS clients can operate DNS resolvers that require DNSSEC signatures on DNS records to validate against the distributed trust anchors?".

> FWIW, `delv pkg.mirror.server.org` is how you can check DNSSEC:

  man systemd-resolved # nmcli
  man delv
  man dnssec-trust-anchors.d

  delv pkg.mirror.server.org
> Sigstore is a free and open Linux Foundation service for asset signatures: https://sigstore.dev/what_is_sigstore/

> The TUF Overview explains some of the risks of asset signature systems; key compromise, there's one key for everything that we all share and can't log the revocation of in a CT (Certificate Transparency) log distributed like a DLT, https://theupdateframework.io/overview/

> Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency

> Yeah, there's a channel to secure there at that layer of the software supply chain as well.

> "PEP 480 -- Surviving a Compromise of PyPI: End-to-end signing of packages" (2014-) https://www.python.org/dev/peps/pep-0480/

>> Proposed is an extension to PEP 458 that adds support for end-to-end signing and the maximum security model. End-to-end signing allows both PyPI and developers to sign for the distributions that are downloaded by clients. The minimum security model proposed by PEP 458 supports continuous delivery of distributions (because they are signed by online keys), but that model does not protect distributions in the event that PyPI is compromised. In the minimum security model, attackers who have compromised the signing keys stored on PyPI Infrastructure may sign for malicious distributions. The maximum security model, described in this PEP, retains the benefits of PEP 458 (e.g., immediate availability of distributions that are uploaded to PyPI), but additionally ensures that end-users are not at risk of installing forged software if PyPI is compromised.

> One W3C Linked Data way to handle https://schema.org/SoftwareApplication ( https://codemeta.github.io/user-guide/ ) cryptographic signatures of a JSON-LD manifest with per-file and whole package hashes would be with e.g. W3C ld-signatures/ld-proofs and W3C DID (Decentralized Identifiers) or x.509 certs in a CT log.

```

FWIU, the Fuchsia team is building package signing on top of TUF.
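The per-file hash-check step common to these signing schemes can be sketched as follows (the manifest and its handling here are hypothetical; real systems sign the manifest itself with TUF/Sigstore metadata):

```python
import hashlib

def verify_manifest(manifest, files):
    """Check per-file SHA-256 digests against a manifest.

    Illustration of the hash-verification step only: in a real system
    the manifest itself carries a verifiable signature."""
    return all(
        hashlib.sha256(files[name]).hexdigest() == digest
        for name, digest in manifest.items()
    )

files = {"app.bin": b"hello"}
manifest = {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

ok = verify_manifest(manifest, files)            # intact package
files["app.bin"] = b"hello, tampered"
tampered_ok = verify_manifest(manifest, files)   # detected tampering
print(ok, tampered_ok)
```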

[-]

A from-scratch tour of Bitcoin in Python


> The 'dumbcoin' jupyter notebook is also a good reference: "Dumbcoin - An educational python implementation of a bitcoin-like blockchain" https://nbviewer.jupyter.org/github/julienr/ipynb_playground...

https://github.com/yjjnls/awesome-blockchain#implementation-... and https://github.com/openblockchains/awesome-blockchains#pytho... list a few more ~"blockchain from scratch" [in Python] examples.

... FWIU, Ethereum has the better Python story. There was a reference implementation of Ethereum in Python? https://ethereum.org/en/developers/docs/programming-language...
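The core hash-chaining idea those from-scratch examples demonstrate, in a few lines (a toy: no proof-of-work, signatures, or networking):

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical (sorted-key) JSON encoding."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev": prev_hash}

# Each block commits to its predecessor's hash, forming the chain.
genesis = make_block("genesis", "0" * 64)
b1 = make_block("tx: alice->bob 1", block_hash(genesis))
b2 = make_block("tx: bob->carol 1", block_hash(b1))
chain = [genesis, b1, b2]

def valid(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ok = valid(chain)                              # intact history
chain[1]["data"] = "tx: alice->bob 1000000"    # tamper with an old block
tampered_ok = valid(chain)                     # b2's prev no longer matches
print(ok, tampered_ok)
```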

[-]

An Omega-3 that’s poison for cancer tumors


Fish don't synthesize Omega PUFAs; they eat algae (which unfortunately and inopportunely stains teeth).

From "Warning: Combination of Omega-3s in Popular Supplements May Blunt Heart Benefits" https://scitechdaily.com/warning-combination-of-omega-3s-in-... :

> Now, new research from the Intermountain Healthcare Heart Institute in Salt Lake City finds that higher EPA blood levels alone lowered the risk of major cardiac events and death in patients, while DHA blunted the cardiovascular benefits of EPA. Higher DHA levels at any level of EPA, worsened health outcomes.

> Results of the Intermountain study, which examined nearly 1,000 patients over a 10-year-period,

> “Based on these and other findings, we can still tell our patients to eat Omega-3 rich foods, but we should not be recommending them in pill form as supplements or even as combined (EPA + DHA) prescription products,” he said. “Our data adds further strength to the findings of the recent REDUCE-IT (2018) study that EPA-only prescription products reduce heart disease events.”

Now they're sayin'; so I go look for an EPA-only supplement, and TIL about re-esterified triglyceride and it says it's molecularly distilled anchovies in blister packages. Which early land mammals probably ate, so.

[-]

Discover and Prevent Linux Kernel Zero-Day Exploit Using Formal Verification

[Coq, VST, CompCert]

Formal methods: https://en.wikipedia.org/wiki/Formal_methods

Formal specification: https://en.wikipedia.org/wiki/Formal_specification

Implementation of formal specification: https://en.wikipedia.org/wiki/Anti-pattern#Software_engineer...

Formal verification: https://en.wikipedia.org/wiki/Formal_verification

From "Why Don't People Use Formal Methods?" https://news.ycombinator.com/item?id=18965964 :

> Which universities teach formal methods?

> - q=formal+verification https://www.class-central.com/search?q=formal+verification

> - q=formal+methods https://www.class-central.com/search?q=formal+methods

> Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?

Can there still be side channel attacks in formally verified systems? Can e.g. TLA+ help with that at all?

[-]

Anatomy of a Linux DNS Lookup


Is there a good example of a Linux package that does this correctly?


Yeah, but if you regress to 'legacy DNS' by removing systemd-resolved then there's no good way to do per-interface DNS (~client-split DNS), or (optionally) validate DNSSEC, or do DoH/DoT; and then nothing respawns and logs consistently-timestamped process events of substitute network service processes.

FWIU, per-user DNS configs are still elusive. Per-user DNS would make it easier to use family-safe DNS (which redirects to family-safe e.g. SafeSearch domains) by default; though some forums are essential for system administration.

[+]

Your system may also depend upon one or more package managers that all depend upon DNS (and hopefully e.g. DNSSEC and DoH/DoT).

[+]

Unfortunately, not all packages are necessarily signed either; "Which package managers require packages to be cryptographically signed?" is similar to "Which DNS clients can operate resolvers that require DNSSEC signatures on DNS records to validate against the distributed trust anchors?".

FWIW, `delv pkg.mirror.server.org` is how you can check DNSSEC:

  man systemd-resolved # nmcli
  man delv
  man dnssec-trust-anchors.d

  delv pkg.mirror.server.org
Sigstore is a free and open Linux Foundation service for asset signatures: https://sigstore.dev/what_is_sigstore/

The TUF Overview explains some of the risks of asset-signature systems: key compromise; a single key for everything that we all share and whose revocation can't be logged in a CT (Certificate Transparency) log distributed like a DLT. https://theupdateframework.io/overview/

Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency

Yeah, there's a channel to secure there at that layer of the software supply chain as well.

"PEP 480 -- Surviving a Compromise of PyPI: End-to-end signing of packages" (2014-) https://www.python.org/dev/peps/pep-0480/

> Proposed is an extension to PEP 458 that adds support for end-to-end signing and the maximum security model. End-to-end signing allows both PyPI and developers to sign for the distributions that are downloaded by clients. The minimum security model proposed by PEP 458 supports continuous delivery of distributions (because they are signed by online keys), but that model does not protect distributions in the event that PyPI is compromised. In the minimum security model, attackers who have compromised the signing keys stored on PyPI Infrastructure may sign for malicious distributions. The maximum security model, described in this PEP, retains the benefits of PEP 458 (e.g., immediate availability of distributions that are uploaded to PyPI), but additionally ensures that end-users are not at risk of installing forged software if PyPI is compromised.

One W3C Linked Data way to handle cryptographic signatures for a https://schema.org/SoftwareApplication ( https://codemeta.github.io/user-guide/ ) would be a JSON-LD manifest with per-file and whole-package hashes, signed with e.g. W3C ld-signatures/ld-proofs and W3C DIDs (Decentralized Identifiers) or x.509 certs in a CT log.
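A hedged sketch of the manifest-building half of that idea, using only the stdlib; the terms `fileHashes` and `packageSha256` are hypothetical (not schema.org vocabulary), and signing with ld-proofs/DIDs is out of scope here:

```python
import hashlib
import json
from pathlib import Path

def file_sha256(path):
    """Return the hex SHA-256 digest of a file's bytes."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_manifest(package_dir):
    """Build a JSON-LD-style manifest with per-file and whole-package hashes."""
    files = sorted(p for p in Path(package_dir).rglob("*") if p.is_file())
    entries = [{"name": str(p.relative_to(package_dir)),
                "sha256": file_sha256(p)} for p in files]
    # Whole-package hash: hash of the canonicalized per-file entries
    whole = hashlib.sha256(
        json.dumps(entries, sort_keys=True).encode()).hexdigest()
    return {"@context": "https://schema.org",
            "@type": "SoftwareApplication",
            "fileHashes": entries,       # hypothetical term
            "packageSha256": whole}      # hypothetical term
```

A signature (ld-proofs, x.509, GPG, ...) would then cover the serialized manifest rather than each artifact separately.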

[-]

JupyterLite – WASM-powered Jupyter running in the browser

[+]
[+]
[+]
[+]

From https://news.ycombinator.com/item?id=24052393 re: Starboard:

> https://developer.mozilla.org/en-US/docs/Web/Security/Subres... : "Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match."

> There's a new Native Filesystem API: "The new Native File System API allows web apps to read or save changes directly to files and folders on the user's device." https://web.dev/native-file-system/

> We'll need a way to grant specific URLs specific, limited amounts of storage.

[...]

> https://github.com/deathbeds/jyve/issues/46 :

> Would [Micromamba] and conda-forge build a WASM architecture target?
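The SRI value quoted above is straightforward to compute locally: it's the algorithm name plus the base64 of the raw digest. A minimal sketch (the example script content is arbitrary):

```python
import base64
import hashlib

def sri_hash(content: bytes, algo: str = "sha384") -> str:
    """Compute a Subresource Integrity value: '<algo>-<base64(digest)>'."""
    digest = hashlib.new(algo, content).digest()
    return f"{algo}-{base64.b64encode(digest).decode()}"

# Value suitable for a <script integrity="..."> attribute
print(sri_hash(b"console.log('hello');"))
```

The browser recomputes the digest of the fetched resource and refuses to execute it on mismatch.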

[-]

Accenture, GitHub, Microsoft and ThoughtWorks Launch the GSF

> With data centers around the world accounting for 1% of global electricity demand, and projections to consume 3-8% in the next decade, it’s imperative we address this as an industry.

> To help in that endeavor, we’re excited to announce the formation of The Green Software Foundation – a nonprofit founded by Accenture, GitHub, Microsoft and ThoughtWorks established with the Linux Foundation and the Joint Development Foundation Projects LLC to build a trusted ecosystem of people, standards, tooling and leading practices for building green software. The Green Software Foundation was born out of a mutual desire and need to collaborate across the software industry. Organizations with a shared commitment to sustainability and an interest in green software development principles are encouraged to join the foundation to help grow the field of green software engineering, contribute to standards for the industry, and work together to reduce the carbon emissions of software. The foundation aims to help the software industry contribute to the information and communications technology sector’s broader targets for reducing greenhouse gas emissions by 45% by 2030, in line with the Paris Climate Agreement.

Here's hoping for hand-optimized, efficient EC, SHA-256, SHA-3, and scrypt routines now that the incentives are there. See also the Crypto Climate Accord, which is also inspired by the Paris Agreement: https://cryptoclimate.org/

... "Thermodynamics of Computation Wiki" https://news.ycombinator.com/item?id=18146854

Is 100% offset by PPAs always 200% Green?

From "Ask HN: What jobs can a software engineer take to tackle climate change?" https://news.ycombinator.com/item?id=20015801 :

> [ ] We should create some sort of a badge and structured data (JSONLD, RDFa, Microdata) for site headers and/or footers that lets consumers know that we're working toward '200% green' so that we can vote with our money.

[+]

No, under the Paris Agreement, countries set voluntary targets for themselves and regularly reassess.

[+]

TBF, the glut of [Chinese,] solar panels has significantly helped lower the cost of renewables; which is in everyone's interest.

[+]

"What are you doing to help solve that problem?"

[-]

Rocky Linux releases its first release candidate

[+]
[+]
[+]
[+]

Would Rocky Linux be an option for CERN?

I'm assuming the CentOS 8 install instructions for e.g. GitLab also work with Rocky Linux? Conda/Micromamba definitely should.

[-]

USB-C is about to go from 100W to 240W, enough to power beefier laptops

What are the costs to add a USB PD module to an electronic device? https://hackaday.com/2021/04/21/easy-usb-c-power-for-all-you...

- [ ] Create an industry standard interface for charging and using [power tool,] battery packs; and adapters

[-]

Half-Double: New hammering technique for DRAM Rowhammer bug

From "Rowhammer for qubits: is it possible?" https://amp.reddit.com/r/quantum/comments/7osud4/rowhammer_f... :

> Sometimes bits just flip due to "cosmic rays"; or, logically, also due to e.g. neutron beams and magnetic fields.

> With rowhammer, there are read/write (?) access patterns which cause predictable-enough information "leakage" to be useful for data exfiltration and privilege escalation.

> With the objective of modeling qubit interactions using quantum-mechanical properties of fields of electrons in e.g. DRAM: is there a way to use DRAM electron "soft errors" to model quantum interactions; to build a quantum computer from what we currently see as errors in DRAM?

> If not with current DRAM, could one apply a magnetic field to DRAM in order to exploit quantum properties of electrons moving in a magnetic field?

https://en.wikipedia.org/wiki/DRAM

https://en.wikipedia.org/wiki/Row_hammer

https://en.wikipedia.org/wiki/Soft_error

https://en.wikipedia.org/wiki/Crosstalk

> [...] are there DRAM read/write patterns which cause errors due to interference which approximate quantum logic gates? Probably not, but maybe; especially with an applied magnetic field (which then isn't the DRAM sitting on our desks, it's then DRAM + a constant or variable field).

> I suppose to test this longshot theory, one would need to fuzz low-level RAM loads and search for outputs that look like quantum gate outputs. Or, monitor normal workloads which result in RAM faults which approximate quantum logic gate outputs and train a network to recognize the features.

> I am reminded of a recent approach to in-RAM computing that's not memristors.

> Soft errors caused by cosmic rays are obviously more frequent at higher altitudes (and outside of the Van Allen radiation belt).

Thought I'd ask this here as well.

Quantum tunneling was the perceived barrier at like DDR5 and higher densities FWIU? Barring new non-electron-based tech, how can we prevent adjacent electrons from just flipping at that gate grid gap size?

Other Quantum-on-Silicon approaches have coherence issues, too.

[-]

Setting up a Raspberry Pi with 2 Network Interfaces as a simple router

[+]
[+]
[+]

> This page shows devices which have a LTE modem built in and are supported by OpenWrt.

https://openwrt.org/toh/views/toh_lte_modem_supported

It looks like this table is neither current nor complete though. And there's a different table of OpenWRT compatible devices that have a battery as well.

> [The Amarok (GL-X1200) Industrial IoT Gateway has] 2x SIM card slots for 2x 4G LTE modems (probably miniPCI-E so maybe upgradeable to 5G later), external antenna connectors for the LTE modems, MicroSD, #OpenWRT: https://store.gl-inet.com/collections/4g-smart-router/produc...

The Turris Omnia also has 4G LTE SIM card support (and LXC in their OpenWRT build). https://openwrt.org/toh/turris/turris_omnia

There's also a [Dockerized] x86 build of OpenWRT that probably also supports Mini PCI-E modules for 4G LTE, LoRa, and 5G. Route metrics determine which [gateway] route is tried first.

From "How much total throughput can your wi-fi router really provide?" https://news.ycombinator.com/item?id=26596395 :

> In 2021, most routers - even with OpenWRT and hardware-offloading - cannot actually push 1 Gigabit over wired Ethernet, though the port spec does say 1000 Mbps

[-]

What to do about GPU packages on PyPI?

[+]
[+]

[Huge GPU] packages can be cached locally: persist ~/.cache/pip between builds with e.g. Docker, or run a PyPI caching proxy.

"[Discussions on Python.org] [Packaging] Draft PEP: PyPI cost solutions: CI, mirrors, containers, and caching to scale" https://discuss.python.org/t/draft-pep-pypi-cost-solutions-c...

> Continuous Integration automated build and testing services can help reduce the costs of hosting PyPI by running local mirrors and advising clients in regards to how to efficiently re-build software hundreds or thousands of times a month without re-downloading everything from PyPI every time.

[...]

> Request from and advisory for CI Services and CI Implementors:

> Dear CI Service,

> - Please consider running local package mirrors and enabling use of local package mirrors by default for clients’ CI builds.

> - Please advise clients regarding more efficient containerized software build and test strategies.

> Running local package mirrors will save PyPI (the Python Package Index, a service maintained by PyPA, a group within the non-profit Python Software Foundation) generously donated resources. (At present (March 2020), PyPI costs ~ $800,000 USD a month to operate; even with generously donated resources).

Looks like the current figure is significantly higher than $800K/mo for science.

How to persist ~/.cache/pip between builds with e.g. Docker in order to minimize unnecessary GPU package re-downloads:

  RUN --mount=type=cache,target=/root/.cache/pip \
      pip install -r requirements.txt

  RUN --mount=type=cache,target=/home/appuser/.cache/pip \
      pip install --user -r requirements.txt

[+]
[-]

Markdown Notes VS Code extension: Navigate notes with [[wiki-links]]

> Syntax highlighting for #tags.

What's the best way to search for #tags with VS Code? Are #tags indexed into an e.g. ctags file within a project or a directory?

> @bibtex-citations: Use pandoc-style citations in your notes (eg @author_title_year) to get syntax highlighting, autocompletion and go to definition, if you setup a global BibTeX file with your references.

[+]

Thanks, yeah. Is there anything that does stemming or at least depluralization of the word around the cursor or the full selection before brute searching for it?

[-]

Ask HN: Choosing a language to learn for the heck of it

I'm a technical manager, which means I do a lot of administrative stuff and a little coding. The coding has become a nice distraction when I need to take a break.

For "real work" I write mostly Python, a lot of SQL, a little bit of Go, and some shell scripting to glue it together. I'd like to learn something I have no need of for work. If it becomes useful later, that is OK, but not a goal. The goal is in creating something just for fun. That something is undefined, so general purpose languages are the population.

I have become curious lately about Nim, Crystal, and Zig: small, modern, high-performance languages. Curiosity comes from the cases when they are mentioned here, sometimes for reasons similar to those I list above.

Nim is on top of the list: sort of Python-like, supported on Windows (I use Win/Mac/Linux), and it appears to have libraries for the things I do: process text for insights; play projects would use interesting data instead of business data.

Crystal does not support Windows (yet), but appears to be closer to Ruby. Its performance may be a bit better.

Zig came on my radar recently; I know less about it than the little I know of the others.

Suggestions on choosing one as a hobby language?

> Suggestions on choosing one as a hobby language?

IDK how much of a hobby it'd remain, but: Rust compiles to WASM, and C++ now has auto and coroutines (and real live memory management).

"Ask HN: Is it worth it to learn C in 2020?" https://news.ycombinator.com/item?id=21878664

[-]

Show HN: Django SQL Dashboard

[+]

This launches the web-based Werkzeug debugger on Exception:

  pip install django-extensions
  python manage.py runserver_plus
https://django-extensions.readthedocs.io/en/latest/runserver...

This should run IPython Notebook with database models already imported:

  python manage.py shell_plus --notebook
But writing fixtures, tests and (celery / dask-labextension) tasks is probably the better way to do things. Django-rest-assured is one way to get a tested REST API with DRF and e.g. factory_boy for generating test data.

[-]

Interactive IPA Chart

Jeud | 2021-05-06 13:33:00 | 243

Is there a [Linked Data] resource with the information in this interactive IPA chart (which is from Wikipedia, FWICS), in addition to the following?:

- phoneme, ns:"US English letter combinations", []

- phoneme, ns:"schema.org/CreativeWorks which feature said phoneme", []

AFAIU, WordNet RDF doesn't have links to any IPA RDFS/OWL vocabulary/ontology yet.

[-]

Google Dataset Search

[+]

Use cases for such [LD: Linked Data] metadata:

1. #StructuredPremises:

> (How do I indicate that this is a https://schema.org/ScholarlyArticle predicated upon premises including this Dataset and these logical propositions?)

2. #LinkedMetaAnalyses; #LinkedResearch "#StudyGraph"

3. [CSVW (Tabular Data Model),] schema.org/Dataset(s) with per column (per-feature) physical quantity and unit URIs with e.g. QUDT and/or https://schema.org/StructuredValue metadata for maximum data reusability.

4. JupyterLab notebooks:

4a. JupyterLab Metadata Service extension: https://github.com/jupyterlab/jupyterlab-metadata-service :

> - displays linked data about the resources you are interacting with in JupyterLab.

> - enables other extensions to register as linked data providers to expose JSON LD about an entity given the entity's URL.

> - exposes linked data to the user as a Linked Data viewer in the Data Browser pane.

4b. JupyterLab Data Explorer: https://github.com/jupyterlab/jupyterlab-data-explorer :

> - Data changing on you? Use RxJS observables to represent data over time.

> - Have a new way to look at your data? Create React or lumino components to view a certain type.

> - Built-in data explorer UI to find and use available datasets.
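To illustrate the per-column unit metadata idea in use case 3, here's a hedged JSON-LD sketch; every value, and the exact placement of the CSVW and QUDT terms, is an assumption for illustration:

```python
import json

# Hypothetical values throughout; structure loosely follows
# schema.org/Dataset with CSVW-style per-column metadata.
dataset = {
    "@context": ["https://schema.org",
                 {"csvw": "http://www.w3.org/ns/csvw#"}],
    "@type": "Dataset",
    "name": "Example observations",
    "distribution": {
        "@type": "DataDownload",
        "contentUrl": "https://example.org/data.csv",
        "encodingFormat": "text/csv",
    },
    "csvw:tableSchema": {
        "csvw:columns": [
            {"csvw:name": "temperature",
             # Per-column physical quantity kind and unit URI (QUDT)
             "csvw:propertyUrl":
                 "http://qudt.org/vocab/quantitykind/Temperature",
             "unitCode": "http://qudt.org/vocab/unit/DEG_C"},
        ]
    },
}
print(json.dumps(dataset, indent=2))
```

With per-column quantity and unit URIs like these, a consumer can merge or convert columns without guessing units from header strings.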

[-]

Ask HN: Cap Table Service Recommendations

Recent founders, do you have any recommendations for services for managing a cap table? Or do you do it yourself? Any suggestions for how to choose?

[-]

Hosting SQLite databases on GitHub Pages or any static file hoster

[+]
[+]
[+]

This looks pretty efficient. Some chains can be interacted with without e.g. web3.js? LevelDB indexes aren't SQLite.

Datasette is one application for views of read-only SQLite dbs with out-of-band replication. https://github.com/simonw/datasette
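A minimal stdlib sqlite3 sketch of that read-only pattern (filenames and schema are hypothetical): build the database and its indexes once, then only ever open it read-only, which is effectively how a static-file client treats it:

```python
import sqlite3

# Build once; an index keeps the number of page reads (and hence
# HTTP range requests, for a statically hosted db) per query small.
con = sqlite3.connect("example.db")
con.execute("CREATE TABLE IF NOT EXISTS posts "
            "(id INTEGER PRIMARY KEY, title TEXT)")
con.execute("CREATE INDEX IF NOT EXISTS idx_posts_title ON posts (title)")
con.execute("INSERT INTO posts (title) VALUES ('hello')")
con.commit()
con.close()

# Open read-only via a URI; writes now raise an OperationalError.
ro = sqlite3.connect("file:example.db?mode=ro", uri=True)
print(ro.execute("SELECT title FROM posts").fetchone()[0])
```

Replication is then out-of-band: regenerate the file and re-upload it, as with Datasette's deployment model.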

There are a bunch of *-to-sqlite utilities in the corresponding Dogsheep project.

Arrow JS for 'paged' browser-client access to DuckDB might be possible and faster, but without full SQLite SQL compatibility or the SQLite test suite. https://arrow.apache.org/docs/js/

https://duckdb.org/ :

> Direct Parquet & CSV querying

In-browser notebooks like Pyodide and Jyve have local filesystem access with the new "Filesystem Access API", but downloading/copying all data to the browser for every run of a browser-hosted notebook may not be necessary. https://web.dev/file-system-access/

[+]
[-]

Wasm3 compiles itself (using LLVM/Clang compiled to WASM)

Self-hosting (compilers) https://en.wikipedia.org/wiki/Self-hosting_(compilers) :

> In computer programming, self-hosting is the use of a program as part of the toolchain or operating system that produces new versions of that same program—for example, a compiler that can compile its own source code

[+]

The Wikipedia article lists quite a few languages for which there are self-hosting compilers.

JS can already write more JS. Are there advantages and risks introduced by this new capability for browser-hosted (?) WASM LLVM to compile WASM?

[-]

Semgrep: Semantic grep for code

Is there a more complete example of how to call semgrep from pre-commit (which runs before every git commit) in order to prevent e.g. Python print calls (`print()`, including calls split across multiple lines) from being checked in?

https://semgrep.dev/docs/extensions/ describes how to do pre-commit.

Nvm, here's semgrep's own .pre-commit-config.yaml for semgrep itself: https://github.com/returntocorp/semgrep/blob/develop/.pre-co...

[+]

Yeah but that githook will only be installed on that one repo on that one machine. And they may have no or a different version of bash installed (on e.g. MacOS or Windows). IMHO, POSIX-compatible portable shell scripts are more trouble than portable Python scripts.

Pre-commit requires Python and pre-commit to be installed (and then it downloads every hook function).

This fetches the latest version of every hook defined in the .pre-commit-config.yaml:

  pre-commit autoupdate
https://pre-commit.com/#pre-commit-autoupdate

A person could easily `ln -s repo/.hooks/hook*.sh repo/.git/hooks/` after every git clone.

[+]
[+]
[+]

IDE plugins are not at all consistent from one IDE to another. Pre-commit is great for teams with different IDEs because all everyone needs to do is:

  [pip,] install pre-commit
  pre-commit install
  # git commit
  #   pre-commit run --all-files

  # pre-commit autoupdate
https://pre-commit.com/

[-]

Ask HN: What to use instead of Bash / Sh for scripting?

I'm at the point where I feel a certain fatigue writing Bash scripts, but I am just not sure of what the alternative is for medium sized (say, ~150-500 LOC) scripts.

The common refrain of "use Python" hasn't really worked fantastically: I don't know what version of Python I'm going to have on the system, installing dependencies is not fun, shelling out when needed is not pleasant, and the size of program always seemingly doubles.

I'm willing to accept something that's not on the system as long as it's one smallish binary that's available in multiple architectures. Right now, I've settled on (ab)using jq, using it whenever tasks get too complex, but I'm wondering if anyone else has found a better way that should also hopefully not be completely a black box to my colleagues?

A configuration management system may have you write e.g. YAML with Jinja2 so that you don't reinvent the idempotent wheel.

It's really easy to write dangerous shell scripts ("${@}" vs ${@} for example) and also easy to write dangerous Python scripts (cmd="{}; {}").

Sarge is one way to use subprocess in Python. https://sarge.readthedocs.io/en/latest/
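The injection hazard above can be sketched with only the stdlib subprocess module (the filename is a contrived example):

```python
import subprocess

untrusted = "file.txt; rm -rf /"  # attacker-controlled input

# Dangerous: string interpolation plus shell=True lets the shell
# interpret "; rm -rf /" as a second command:
#   subprocess.run(f"wc -l {untrusted}", shell=True)

# Safer: an argument list is passed to exec() directly, with no shell
# to interpret metacharacters; the whole string is one odd filename.
result = subprocess.run(["wc", "-l", untrusted],
                        capture_output=True, text=True)
print(result.returncode)  # nonzero (no such file), but nothing executed
```

The same list-of-arguments discipline is what sarge's shell-quoting helpers build on.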

If you're doing installation and configuration, the most team-maintainable thing is to avoid custom code and work with a configuration management system test runner.

When you think "A shell script will be fine, all I have to do is [...]", and then you realize that you need a portable POSIX shell script; that, to be merged, it must have actual automated tests of the things that are supposed to run as root (now in a fresh VM/container for testing); and that manual verification of `set -xev` output isn't an automated assertion.

> avoid custom code and work with a configuration management system test runner

ansible-molecule is a test runner for Ansible playbooks that can create VMs or containers on local or remote resources.

You can definitely just call shell scripts from Ansible, but the (parallel) script output is only logged after the script returns a return code, unless you pipe the script output somewhere and tail that.

> manual verification of `set -xev` output isn't an automated assertion.

From "Bash Error Handling" https://news.ycombinator.com/item?id=24745833 : you can display the line number in `set -x` output by setting $PS4:

  export PS4='+(${BASH_SOURCE}:${LINENO}) '
  set -x
But that's no substitute for automated tests and a test runner that produces e.g. TAP output from test runner results: http://testanything.org/producers.html#shell
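TAP is just a line protocol, so even a minimal producer is useful; a sketch (test names are made up):

```python
def tap(results):
    """Emit TAP (Test Anything Protocol) lines for (name, passed) pairs."""
    lines = [f"1..{len(results)}"]          # the plan: how many tests
    for i, (name, passed) in enumerate(results, 1):
        status = "ok" if passed else "not ok"
        lines.append(f"{status} {i} - {name}")
    return "\n".join(lines)

print(tap([("resolv.conf is a symlink", True),
           ("delv validates DNSSEC", False)]))
```

Any TAP consumer (prove, tappy, CI plugins) can then aggregate results across shell, Python, or other runners.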

[-]

Estonian Electronic Identity Card and Its Security Challenges [pdf]

[+]
[+]
[+]
[+]
[+]
[+]

FWIU, DHS has funded [1] development of e.g W3C DID Decentralized Identifiers [2] and W3C Verifiable Credentials [3]:

[1] https://www.google.com/search?q=site%3Aw3.org+%22funded+by+t...

[2] https://www.w3.org/TR/did-core/

[3] https://www.w3.org/TR/vc-data-model/

Additional notes regarding credentials (certificates, badges, degrees, honorary degrees, then-evaluated competencies) and capabilities models: https://news.ycombinator.com/item?id=19813340

westurner/blockchain-credential-resources.md: https://gist.github.com/westurner/4345987bb29fca700f52163c33...

Value storage and transmission networks have developed standards and implementations for identity, authentication, and authorization. ILP (Interledger Protocol) RFC 15 specifies "ILP addresses" for [crypto] ledger account IDs: https://interledger.org/rfcs/0015-ilp-addresses/

From "Verifiable Credentials Use Cases" https://w3c.github.io/vc-use-cases/ :

> A verifiable claim is a qualification, achievement, quality, or piece of information about an entity's background such as a name, government ID, payment provider, home address, or university degree. Such a claim describes a quality or qualities, property or properties of an entity which establish its existence and uniqueness. The use cases outlined here are provided in order to make progress toward possible future standardization and interoperability of both low- and high-stakes claims with the goals of storing, transmitting, and receiving digitally verifiable proof of attributes such as qualifications and achievements. The use cases in this document focus on concrete scenarios that the technology defined by the group should address.

FWIU, the US Department of Education is studying or already working with https://blockcerts.org/ for educational credentials.

Here are the open sources of blockchain-certificates/cert-issuer and blockchain-certificates/cert-verifier-js: https://github.com/blockchain-certificates

Might a natural-born resident get a government ID card for passing a recycling and environmental sustainability quiz?

[-]

Systemd makes life miserable, again, this time by breaking DNS

So, I made the mistake of updating my laptop from Fedora 31 to Fedora 33 last night. Normally this is fairly painless, as my laptop is one of the last machines I perform distribution upgrades on. Today, while doing some pole survey work out in the field, I tethered my laptop to my phone as has been done hundreds of times before. To my surprise, DNS doesn't work anymore, but only in web browsers. Both Firefox and Chrome can't resolve names anymore. Command line tools like ping and host work normally. WTF?

Why are distributions continuing to allow systemd to extend its tentacles deeper and deeper into more parts of Linux userland with poorly tested subsystem replacements for parts of Linux that have been stable for decades? Does nobody else consider this repeating pattern of rewrite-replace-introduce-new-bugs a problem? Newer is not all that better if you break what is a pretty bog standard and common use-case.

As well, Firefox now defaults to DoH (DNS over HTTPS), which may be bypassing systemd-resolved by doing DNS resolution in the app instead of calling `gethostbyname()` (`man gethostbyname`) and/or `getaddrinfo()`.
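Python's socket module wraps the same getaddrinfo(3) path, which is one way to check what the system resolver returns when a browser's in-app DoH gives different answers:

```python
import socket

# Resolves via the system path (NSS -> systemd-resolved stub or
# /etc/resolv.conf), exactly what getaddrinfo(3) does; an in-app
# DoH client like Firefox's bypasses this entirely.
for family, type_, proto, canon, sockaddr in socket.getaddrinfo(
        "localhost", 80, proto=socket.IPPROTO_TCP):
    print(family, sockaddr[0])
```

If this resolves a name that the browser can't (or vice versa), the two are clearly using different resolution paths.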

`man systemd-resolved` describes why there is new DNS functionality: security; a "caching and validating DNS/DNSSEC stub resolver, as well as an LLMNR and MulticastDNS resolver and responder".

From `man systemd-resolved` https://man7.org/linux/man-pages/man8/systemd-resolved.servi... :

> To improve compatibility, /etc/resolv.conf is read in order to discover configured system DNS servers, but only if it is not a symlink to /run/systemd/resolve/stub-resolv.conf, /usr/lib/systemd/resolv.conf or /run/systemd/resolve/resolv.conf

> [...] Note that the selected mode of operation for this file is detected fully automatically, depending on whether /etc/resolv.conf is a symlink to /run/systemd/resolve/resolv.conf or lists 127.0.0.53 as DNS server.

Is /etc/resolv.conf read on reload and/or restart of the systemd-resolved service (`systemctl restart systemd-resolved`)?

Some examples of validating DNSSEC in `man delv` would be helpful.

NetworkManager (now with systemd-resolved) is one system for doing DNS configuration for zero or more transient interfaces:

  man nmcli

  nmcli connection help
  nmcli c help
  nmcli c h

  nmcli c show ssid_or_nm_profile | grep -i dns

  nmcli c modify help

  man systemd-resolved
  man delv
  man dnssec-trust-anchors.d

[+]

> manually syncing the clock via ntp usually gets my dns working again.

Why is this necessary?

[+]
[-]

Ask HN: How bad is proof-of-work blockchain energy consumption?

I'm not a blockchain/crypto expert by any means, but I've been hearing about how much energy the proof-of-work blockchains (Bitcoin, Ethereum, NFTs) consume. Unless I'm mistaken their whole design relies on cranking through more and more CPU cycles. Should we be more concerned about this? Are the concerns overblown? Are there ways to improve it without certain crypto currencies imploding?

A rational market would be choosing an asset that offers value storage and transmission (between points in spacetime) according to criteria: "security" (security theater, infosec, cryptologic competency assessment, software assurances), "future stability" (future switching costs), and "cost".

The externalities of energy production are what must be overcome if we are to be able to withstand wasteful overconsumption of electricity. Eventually, we could all have free clean energy and no lightsabers, right?

So, we do need to minimize wasteful overconsumption. Define wasteful in terms of USD/kWh (regardless of industry)? In terms of behavioral economics, why are they behaving that way when there are alternatives that cost <$0.01/tx and a fairly-aggregated comprehensive n kWh of electricity?

TIL about these guys, who are deciding to somewhat-responsibly self-regulate in the interest of long-term environmental sustainability for all of the land: "Crypto Climate Accord". https://cryptoclimate.org/

"Crypto Climate Accord Launches to Decarbonize Cryptocurrency Industry Brings together the likes of CoinShares, ConsenSys, Ripple, and the UNFCCC Climate Champions to lead sustainability in blockchain and crypto" (2021) https://bit.ly/CryptoClimateAccord

> What are the objectives of the Crypto Climate Accord? The Accord’s overall objective is to decarbonize the global crypto industry. There are three provisional objectives to be finalized in partnership with Accord supporters:

> - Enable all of the world’s blockchains to be powered by 100% renewables by the 2025 UNFCCC COP Conference

> - Develop an open-source accounting standard for measuring emissions from the cryptocurrency industry

> - Achieve net-zero emissions for the entire crypto industry, including all business operations beyond blockchains and retroactive emissions, by 2040

Similar to the Paris Agreement (2015), stakeholders appear to be setting their own targets for sustainability in accordance with the Crypto Climate Accord (2021). https://cryptoclimate.org/accord/

Someone who's not in renewables could launch e.g. a "Satoshi Nakamoto Clean Energy Fund: SNCEF" to receive donations from e.g. hash pools and connect nonprofits with sustainability managed renewables. How many SNCEFs did you give this year and why?

#CleanEnergy

[+]
[+]
[+]

More transistors per unit area, but also more efficient, please! There should be demand for more efficient chips (semiconductors) that are fully utilized while depreciating on your ma's electricity bill (which is not yet (?) really determined by a market-based economy with intraday speculation to smooth over differences in supply and demand in the US). Oversupply of the electrical grid results in damage costs, which is why the price sometimes falls so low where there are intraday prices and supply has been over-subsidized, pending the additional load from developing economies and EVs: Electric Vehicles.

New grid renewables (#CleanEnergy) are now less expensive than existing baseload; which makes renewables long term environment-rational and short term price-rational.

"Thermodynamics of Computation Wiki" (2018) https://news.ycombinator.com/item?id=18146854

> No, all space heaters are equally efficient. They all have perfect 100% efficiency, because they turn electrical power into heat. When your work product is heat and the waste product is also heat, then there really is no waste.

This heat must be distributed throughout the room somehow (e.g. by a batteryless woodstove fan, or a Stirling engine that does work with the temperature difference when there is one).

> Technically in the case of cryptocurrency mining, some of the electrical power is turned into information rather than heat. In principle this reduces the amount of heat that you get, but in practice this isn’t even measurable. Most of the information is erased (discarded as useless), which turns it back into heat.

See "Thermodynamics of Computation Wiki" re: a possible way to delete known observer-entangled bits while reducing heat/entropy (thus bypassing Landauer's limit for classical computation?)?

> Only a few hundred bits of information will be kept after successfully mining a block of transactions, and the amount of heat that costs you is fantastically small. Far smaller than you can measure.

Each n-symbol sequence in the hash function output does appear to have nearly equal frequency/probability of occurrence. Indeed, is Proof-of-Work worth the heat if you're not reusing the waste heat?

[-]

What does a PGP signature on a Git commit prove?

[+]
[+]
[+]

That nonce value could be ±\0 or 5,621,964,321e100; though for well-designed cryptographic hash functions it's far less likely that - at maximum difficulty - a low nonce value will result in a hash collision.

[+]

Searching for the value to prepend or append that causes a hash collision is exactly the same as finding a nonce value at maximum difficulty (not less than the difficulty value, exactly equal to the target hash).

Mutate and check.

[+]

Brute forcing to find `hash(data_1+nonce) == hash(data_0)` differs very little from `hash(data_1+nonce) < difficulty_level`. Write each and compare the cost/fitness/survival functions.

If the hash function is reversible - as might be discovered through e.g. mutation and selection - that would help find hashes that are equal, and maybe also ones that are less than a target.

Practically, there are "rainbow tables" for very many combinations of primes and stacked transforms: it's not necessary to search the whole space for simple collisions and may not be necessary for preimages; we don't know and it's just a matter of time. "Collision attack" https://en.wikipedia.org/wiki/Collision_attack

Cryptographic nonce > Hashing: https://en.wikipedia.org/wiki/Cryptographic_nonce#Hashing


Practically, if browsers still relied upon SHA-1 to fingerprint, pin, and verify certificates instead of the actual chain, and there were no file size limits on X.509 certificates, some fields in a cert (e.g. CommonName and SAN) would be chosen and other fields could then potentially serve as nonce.

In the context of finding a valid cert with a known-good hash fingerprint, how many prime keypairs could there be to precompute and cache/memoize when brute forcing?

"SHA-1 > Cryptanalysis and validation" does list chosen-prefix collision as one of many weaknesses now identified in SHA-1: https://en.wikipedia.org/wiki/SHA-1#Cryptanalysis_and_valida...

This is from 2008, re: the 200 PS3s it took to generate a rogue CA cert with a considered-valid MD5 hash: https://hackaday.com/2008/12/30/25c3-hackers-completely-brea...

... Was just discussing e.g. frankencerts the other day: https://news.ycombinator.com/item?id=26605647


Breakthrough for ‘massless’ energy storage


> You can't make a car by building the chassis out of smartphone batteries

They're called structural batteries (or [micro]structural super/ultracapacitors).

"Carmakers want to ditch battery packs, use auto bodies for energy storage" (2020) https://arstechnica.com/cars/2020/11/carmakers-want-to-ditch...


The Ars article I linked has an overview and some history and specific industry applications; whereas OT is about a new approach discovered since the Ars article was written.


OpenSSL Security Advisory


https://project-everest.github.io/ :

> Focusing on the HTTPS ecosystem, including components such as the TLS protocol and its underlying cryptographic algorithms, Project Everest began in 2016 aiming to build and deploy formally verified implementations of several of these components in the F* proof assistant.

> […] Code from HACL*, ValeCrypt and EverCrypt is deployed in several production systems, including Mozilla Firefox, Azure Confidential Consortium Framework, the Wireguard VPN, the upcoming Zinc crypto library for the Linux kernel, the MirageOS unikernel, the ElectionGuard electronic voting SDK, and in the Tezos and Concordium blockchains.

S2n is Amazon's formally verified TLS library. https://en.wikipedia.org/wiki/S2n

IDK about a formally proven PKIX. https://www.google.com/search?q=formally+verified+pkix lists a few things.

A formally verified stack for Certificate Transparency would be a good way to secure key distribution (and revocation); where we currently depend upon a TLS library (typically OpenSSL), or GPG + HKP (HTTP Keyserver Protocol).

Fuzzing on actual hardware - with stochastic things that persist bits between points in spacetime - is a different thing.


Both a gap and an opportunity; someone like an agency or a FAANG with a budget for something like this might do well to (1) invest in the formal methods talent pipeline, and (2) interface very technically with e.g. Everest about PKIX as a core component in need of formal methods.

"The SSL landscape: a thorough analysis of the X.509 PKI using active and passive measurements" (2011) ... "Analysis of the HTTPS certificate ecosystem" (2013) https://scholar.google.com/scholar?oi=bibs&hl=en&cites=16545...

TIL about "Frankencerts": Using Frankencerts for Automated Adversarial Testing of Certificate Validation in SSL/TLS Implementations (2014) https://scholar.google.com/scholar?cites=3525044230307445257... :

> Our first ingredient is "frankencerts," synthetic certificates that are randomly mutated from parts of real certificates and thus include unusual combinations of extensions and constraints. Our second ingredient is differential testing: if one SSL/TLS implementation accepts a certificate while another rejects the same certificate, we use the discrepancy as an oracle for finding flaws in individual implementations.

> Differential testing with frankencerts uncovered 208 discrepancies between popular SSL/TLS implementations such as OpenSSL, NSS, CyaSSL, GnuTLS, PolarSSL, MatrixSSL, etc.

W3C ld-signatures / Linked Data Proofs, and MerkleProof2017: https://w3c-ccg.github.io/lds-merkleproof2017/

"Linked Data Cryptographic Suite Registry" https://w3c-ccg.github.io/ld-cryptosuite-registry/

ld-proofs: https://w3c-ccg.github.io/ld-proofs/

W3C DID: Decentralized Identifiers don't solve for all of PKIX (x.509)?

"W3C DID x.509" https://www.google.com/search?q=w3c+did+x509


How much total throughput can your wi-fi router really provide?


netperf and iperf are utilities for measuring network throughput: https://en.wikipedia.org/wiki/Iperf
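
A toy loopback measurement in the same spirit (this exercises only the kernel's TCP stack, not the NIC or the Wi-Fi link, so it's an illustration of what these tools time, not an iperf replacement):

```python
import socket
import threading
import time

def measure_loopback_throughput(total_mb=32):
    """Toy iperf: time a bulk TCP transfer over loopback, return MB/s."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))     # OS picks a free port
    srv.listen(1)
    port = srv.getsockname()[1]
    payload = b"\x00" * (1 << 20)  # 1 MiB per send

    def sender():
        with socket.create_connection(("127.0.0.1", port)) as s:
            for _ in range(total_mb):
                s.sendall(payload)

    t = threading.Thread(target=sender)
    t.start()
    conn, _ = srv.accept()
    received, start = 0, time.perf_counter()
    while received < total_mb * (1 << 20):
        chunk = conn.recv(1 << 16)
        if not chunk:
            break
        received += len(chunk)
    elapsed = time.perf_counter() - start
    t.join()
    conn.close()
    srv.close()
    return received / elapsed / 1e6  # MB/s
```

netperf/iperf do essentially this, plus latency probes, UDP modes, and parallel streams.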

It's possible to approximate the https://dslreports.com/speedtest using the flent CLI or QT GUI (which calls e.g. fping and netperf) and isolate out ISP variance by running a netperf server on a decent router and/or a workstation with a sufficient NIC (at least 1Gbps). https://flent.org/tests.html

`dslreports_8dn`: https://github.com/tohojo/flent/blob/master/flent/tests/dslr...

From https://flent.org/ :

> RRUL: Create the standard graphic image used by the Bufferbloat project to show the down/upload speeds plus latency in three separate charts:

> `flent rrul -p all_scaled -l 60 -H address-of-netserver -t text-to-be-included-in-plot -o filename.png`

In 2021, most routers - even with OpenWRT and hardware-offloading - cannot actually push 1 Gigabit over wired Ethernet, though the port spec does say 1000 Mbps.


The Most Important Scarce Resource Is Legitimacy

ve55 | 2021-03-23 17:28:53 | 119

Public goods ... Welfare economics ... Social choice theory, Arrow's, Indifference curve: https://en.wikipedia.org/wiki/Indifference_curve

People do collect collectibles; commemorative plates, for example.


A few notes on message passing


> Luckily, global orders are rarely needed and are easy to impose yourself (outside distributed cases): just let all involved parties synchronize with a common process.

When there are multiple agents/actors in a distributed system, the timestamp resolution is datetime64, clock synchronization and network latency are variable, and non-centralized resilience is necessary to eliminate single points of failure, global ordering is impractical to impossible: there is no natural unique key with which to impose a [partial] preorder [1][2], so there are key collisions when you try to merge the streams.

Just don't cross the streams.

[1] https://en.wikipedia.org/wiki/Preorder_(disambiguation)

[2] https://en.wikipedia.org/wiki/Partially_ordered_set

The C in CAP theorem is for Consistency [3][4]. Sequential consistency is elusive because something probably has to block/lock somewhere, unless you've optimally distributed the components of the control flow graph (CFG).

[3] https://en.wikipedia.org/wiki/Consistency_model

[4] https://en.wikipedia.org/wiki/CAP_theorem

FWIU, TLA+ can help find such issues. [5]

[5] https://en.wikipedia.org/wiki/TLA%2B


The Lamport timestamp: https://en.wikipedia.org/wiki/Lamport_timestamp :

> The Lamport timestamp algorithm is a simple logical clock algorithm used to determine the order of events in a distributed computer system. As different nodes or processes will typically not be perfectly synchronized, this algorithm is used to provide a partial ordering of events with minimal overhead, and conceptually provide a starting point for the more advanced vector clock method.
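
The quoted algorithm is short enough to sketch directly (a minimal illustration; class and method names are mine):

```python
class LamportClock:
    """Logical clock: local events tick; receives take max(local, msg) + 1."""
    def __init__(self):
        self.time = 0

    def tick(self):
        # Increment before any local event
        self.time += 1
        return self.time

    def send(self):
        # The timestamp travels with the outgoing message
        return self.tick()

    def receive(self, msg_time):
        # Merge rule guarantees: a happened-before b implies C(a) < C(b)
        # (but not the converse -- concurrent events are not distinguished)
        self.time = max(self.time, msg_time) + 1
        return self.time
```

A send followed by the matching receive always yields a larger timestamp at the receiver, which is the partial ordering; vector clocks extend this to also detect concurrency.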


Duolingo's language notes all on one page

Succinct. What a useful reference.

An IPA (International Phonetic Alphabet) reference would be helpful, too. After taking linguistics in college, I found these Sozo videos of US English IPA consonants and vowels that simultaneously show {the IPA symbol, example words, someone visually and audibly producing the phoneme from 2 angles, and the spectrogram of the waveform}, but a few or a configurable number of [spaced] repetitions would be helpful: https://youtu.be/Sw36F_UcIn8

IDK how cartoonish or how 3D an "articulatory phonetics" model would best reach the widest audience. https://en.wikipedia.org/wiki/Articulatory_phonetics

IPA chart: https://en.wikipedia.org/wiki/International_Phonetic_Alphabe...

IPA chart with audio: https://en.wikipedia.org/wiki/IPA_vowel_chart_with_audio

All of the IPA consonant chart played as a video: "International Phonetic Alphabet Consonant sounds (Pulmonic)- From Wikipedia.org" https://youtu.be/yFAITaBr6Tw

I'll have to find the link of the site where they playback youtube videos with multiple languages' subtitles highlighted side-by-side along with the video.

Found it: https://www.captionpop.com/

It looks like there are a few browser extensions for displaying multiple subtitles as well; e.g. "YouTube Dual Subtitles", "Two Captions for YouTube and Netflix"


Ask HN: The easiest programming language for teaching programming to young kids?

Hi,

I want to start a small community pilot project to help young kids, 8 and above, get interested in programming. We will use video games and robotics projects. We want to keep our tech stack as simple as possible. Here are some of the choices:

Godot + Arduino: We can use C in Godot and Arduino. Arduino might be more interesting for kids as opposed to neatly packaged Lego kits.

Apple SpriteKit + Lego Mindstorm: We can use Swift with Legos. But cost will be higher.

Some of the projects we are considering:

Game-ish:

1. Sound visualizer, like how Winamp and old-school visualizations were. Use speakers. And various other ideas around these concepts.

2. AR project that shows the world around you in cartoonish style. Swap faces etc.

3. Of course, platform games.

Robotics projects:

I see a lot of tutorials for Arduino such as robots that follow sound or light, or stuff like lights display. We will use mostly those.

Some harder project ideas I have are for drones, boats, and other navigational vehicles. This is why I want to use Arduino. But is C going to be too hard for young kids to play with?

What do you recommend? If this works, I would like to expand it and start a company around it.

awesome-python-in-education > "Python suitability for education" lists a few justifications for Python: https://github.com/quobit/awesome-python-in-education#python...

There is a Scratch Jr for Android and iOS. You can view Scratch code as JS. JS does run in a browser, until it needs WASI.

awesome-robotics-libraries: https://github.com/jslee02/awesome-robotics-libraries

FWIU, ROS (Robot Operating System) is now installable with Conda/Mamba. There's a jupyter-ros and a jupyterlab-ros extension: https://github.com/RoboStack/jupyter-ros

I just found this: https://coderdojotc.readthedocs.io/projects/python-minecraft...

> This documentation supports the CoderDojo Twin Cities’ Build worlds in Minecraft with Python code group. This group intends to teach you how to use Python, a general purpose programming language, to mod the popular game called Minecraft. It is targeted at students aged 10 to 17 who have some programming experience in another language. For example, in Scratch.

K12CS Framework has your high-level CS curriculum: https://k12cs.org/ [PDF]: https://k12cs.org/wp-content/uploads/2016/09/K%E2%80%9312-Co...

Educational technology > See also links to e.g. "Evidence-based education" and "Instructional theory": https://en.wikipedia.org/wiki/Educational_technology


Yw. Np. So I just searched for "site: readthedocs.io kids python" https://www.google.com/search?q=site%3Areadthedocs.io+kids+p... and found a few new and old things:

SensorCraft (pyglet (Python + OpenGL)) from US AFRL Sensors Directorate has e.g. Gravity, Rocket Launch, and AI tutorials:

> Most people are familiar with Minecraft [...] for this project we are using a Minecraft type environment created in the Python programming language. The Air Force Research Laboratory (AFRL) Sensors Directorate located in Dayton, Ohio created this guide to inspire kids of all ages to learn to program and at the same time get an idea of what it is like to be a Scientist or Engineer for the Air Force. We created this YouTube video about SensorCraft

https://sensorcraft.readthedocs.io/en/latest/intro.html

`conda install -c conda-forge -y pyglet` should probably work. Miniforge on Win/Mac/Lin is an easy way to get Python installed on anything including ARM64 for a RPi or similar; `conda create -n scraft; conda install -c conda-forge -y python=3.8 jupyterlab jupytext jupyter-book pyglet` . If you're in a conda env, `pip install` should install things within that conda env. Here's the meta.yaml in the conda-forge pyglet-feedstock: https://github.com/conda-forge/pyglet-feedstock/blob/master/...

"BBC micro:bit MicroPython documentation" https://microbit-micropython.readthedocs.io/en/latest/

$25 for a single board-computer with a battery pack and a case (and curricula) is very reasonable: https://en.wikipedia.org/wiki/Micro_Bit

> The [micro:bit] is described as half the size of a credit card[10] and has an ARM Cortex-M0 processor, accelerometer and magnetometer sensors, Bluetooth and USB connectivity, a display consisting of 25 LEDs, two programmable buttons, and can be powered by either USB or an external battery pack.[2] The device inputs and outputs are through five ring connectors that form part of a larger 25-pin edge connector. (V2 adds a Mic and a Speaker)


Raspberry Pi for Kill Mosquitoes by Laser


Yeah, they already did sharks with lasers. IDK what the licensing terms are on that


Donate Unrestricted


Unbelievable.

Rather than diminishing the efforts of others, you could start helping by describing your own efforts to improve education (in order to qualify your ability to assess the mentioned and other efforts to improve education and learning).

In the context of seed and series funding for a seat on the board of a for-profit venture: an NGO / non-profit organization can choose whether to accept restricted donations, and government organizations have elected public-servant leaders who lead and find funding.

Works based on Faust: https://en.wikipedia.org/wiki/Works_based_on_Faust


Bitcoin Is Time


"Bitcoin scalability problem" could link to the Ethereum design docs: https://en.wikipedia.org/wiki/Bitcoin_scalability_problem

The Ethereum design docs could link to direct-listed premined [stable] coins as a solution for Proof of Work and TPS reports: https://github.com/flare-eng/coston#smart-contracts-with-xrp

(edit) re: n-layer solutions: The https://interledger.org/ RFCs and something like Transaction Permission Layer (TPL) will probably be helpful for interchain compliance.

> Interledger is not tied to a single company, blockchain, or currency.

From https://tplprotocol.org/ :

> The challenge: Current blockchain-based protocols lack an effective governance mechanism that ensures token transfers comply with requirements set by the project that issued the token.

> Projects need to set requirements for a variety of reasons. For instance, remaining compliant with securities laws, limiting transfer to beta testers, or limiting transfer to a particular geo-spatial location. Whatever your reason, if a requirement can be verified by a third-party, TPL will be able to help.

In the US, S-Corps can't have nonresident-alien shareholders or more than 100 shareholders, for example; so if firms even wanted to issue securities on a first-layer network, they'd need an extra-chain compliance mechanism to ensure that their issuance is legal pursuant to the applicable local and sovereign policies. Re-issuing stock certificates is something that has to be done sometimes. When is it possible to cancel outstanding tokens?


Foundational Distributed Systems Papers

From "Ask HN: Learning about distributed systems?" https://news.ycombinator.com/item?id=23932271 :

> Papers-we-love > Distributed Systems: https://github.com/papers-we-love/papers-we-love/tree/master...

> awesome-distributed-systems also has many links to theory: https://github.com/theanalyst/awesome-distributed-systems

And links to more lists of distributed systems papers under "Meta Lists": https://github.com/theanalyst/awesome-distributed-systems#me...

In reviewing this awesome list, today I learned about this playlist: "MIT 6.824 Distributed Systems (Spring 2020)" https://youtube.com/playlist?list=PLrw6a1wE39_tb2fErI4-WkMbs...

> awesome-bigdata lists a number of tools: https://github.com/onurakpolat/awesome-bigdata


Low-Cost Multi-touch Whiteboard using the Wiimote (2007) [video]

"Interactive whiteboard" / "smart board" https://en.wikipedia.org/wiki/Interactive_whiteboard

Wii Remote > Features > Sensing: https://en.wikipedia.org/wiki/Wii_Remote#Sensing

.. > Third-Party Development describes a number of applications for IR/optical tracking with an array of nonstationary emitters: https://en.wikipedia.org/wiki/Wii_Remote#Third-party_develop...

Augmented Reality (AR) > Technology > Tracking: https://en.wikipedia.org/wiki/Augmented_reality#Tracking

... links to "VR positional tracking" which does have headings for "Optical" and "Sensor fusion": https://en.wikipedia.org/wiki/VR_positional_tracking


How to Efficiently Choose the Right Database for Your Applications


> You can achieve exactly the same thing with PostgreSQL tables with two columns (key JSONB PRIMARY KEY, value JSONB), including indices on subfields. With way more other functionality and support options.

PostgreSQL docs > "JSON Functions and Operators" https://www.postgresql.org/docs/current/functions-json.html

MongoDB can do jsonSchema:

> Document Validator¶ You can use $jsonSchema in a document validator to enforce the specified schema on insert and update operations:

    db.createCollection( <collection>, { validator: { $jsonSchema: <schema> } } )
    db.runCommand( { collMod: <collection>, validator: { $jsonSchema: <schema> } } )

https://docs.mongodb.com/manual/reference/operator/query/jso...

Looks like there are at least 2 ways to handle JSONschema with Postgres: https://stackoverflow.com/questions/22228525/json-schema-val... ; neither of which are written in e.g. Rust or Go.
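
For a sense of what either database enforces on insert/update, a tiny subset of JSON Schema (`type`, `required`, `properties` only) can be checked in a few lines; this is a hypothetical illustration, not Postgres or MongoDB code:

```python
def validate(instance, schema):
    """Check a tiny subset of JSON Schema: type, required, properties."""
    types = {"object": dict, "array": list, "string": str,
             "number": (int, float), "integer": int, "boolean": bool}
    # Type check, when the schema specifies one
    if "type" in schema and not isinstance(instance, types[schema["type"]]):
        return False
    if isinstance(instance, dict):
        # Required keys must be present
        if any(k not in instance for k in schema.get("required", [])):
            return False
        # Recurse into declared properties
        for key, subschema in schema.get("properties", {}).items():
            if key in instance and not validate(instance[key], subschema):
                return False
    return True
```

Real validators (and `$jsonSchema` in MongoDB) additionally handle `enum`, `items`, numeric bounds, pattern matching, and so on.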

Is there a good way to handle JSON-LD (JSON Linked Data) with Postgres yet?

There are probably 10 comparisons of triple stores with rule inference slash reasoning on data ingress and/or egress.


A Data Pipeline Is a Materialized View


Like a Linked Data thesaurus with typed, reified edges between nodes/concepts/class_instances?

Here's the WordNet RDF Linked Data for "jargon"; like the "Jargon File": http://wordnet-rdf.princeton.edu/lemma/jargon

A Semantic MediaWiki Thesaurus? https://en.wikipedia.org/wiki/Semantic_MediaWiki :

> Semantic MediaWiki (SMW) is an extension to MediaWiki that allows for annotating semantic data within wiki pages, thus turning a wiki that incorporates the extension into a semantic wiki. Data that has been encoded can be used in semantic searches, used for aggregation of pages, displayed in formats like maps, calendars and graphs, and exported to the outside world via formats like RDF and CSV.

Google Books NGram viewer has "word phrase" term occurrence data by year, from books: https://books.google.com/ngrams


There’s no such thing as “a startup within a big company”


Living and working elsewhere with the wages of the region reduces expenses and opportunities; but the wealth of educational resources online [1][2] does make it feasible to even bootstrap a company on the side. Do you need to borrow money to scale quickly enough to pay expenses with sufficient cash flow for the foreseeable future?

Income sources: passive income; content; equity that's potentially worth nothing; a backtested, diversified portfolio (Golden Butterfly or All Weather Portfolio, and why?) of sustainable investments; business models [3]; software implementations of solutions to businesses', organizations', and/or consumers' opportunities.

Single-payer / Universal Healthcare is a looming family expense for many entrepreneurs; many of whom do get into entrepreneurship later in life.

Small businesses make up a significant portion of GDP. Small businesses have to accept risk.

There's still opportunity in the world.

[1] Startup School > Curriculum https://www.startupschool.org/curriculum

[2] https://www.ycombinator.com/library

[3] "Business models based on the compiled list at [HN]" https://gist.github.com/ndarville/4295324

From "Why companies lose their best innovators (2019)" https://news.ycombinator.com/item?id=23887903 :

> "Intrapreneurial." What does that even mean? The employee, within their specialized department, spends resources (time, money, equipment) on something that their superior managers have not allocated funding for because they want: (a) recognition; (b) job security; (c) to save resources such as time and money; (d) to work on something else instead of this wasteful process; (e) more money.


Ask HN: Keyrings: per-package/repo; commit, merge, and release keyrings?

Are there existing specs for specifying per-package release keyrings and per-repo commit and merge keyrings?

Keyring: a collection of keys imported into a datastore with review.

DevOpsSec; Software Supply Chain Security

Packages {X, Y, Z} in Indexes {A, B, C} are artifacts output from Builds (on workstations or servers with security policies). A Build runs a build script (often deliberately not specified in a complete programming language, in order to minimize build complexity; YAML is preferred instead), which should be drawn from a stable commit hash in a Repository (which may be a copy of zero or more branches of a Repository hosted centrally, next to Issues, Build logs, and Build artifact Signing Keys).

Maximally, are there potentially more keyrings (or key-authorization mappings between key and permission) than (1) commit; (2) merge; and (3) release?

Source Projects: Commit, Merge, [Run Build, Login to post-build env], Release (and Sign) package

Downstream Distros: Commit, Merge, [Run Build, Login to post-build env], Release (and Sign) package for the {testing, stable, security} (Signed) Index catalogs


Threat Actors Now Target Docker via Container Escape Features


Docker engine docs > "Protect the Docker daemon socket" https://docs.docker.com/engine/security/protect-access/

dev-sec/cis-docker-benchmark /controls: https://github.com/dev-sec/cis-docker-benchmark/tree/master/...


django-ca is one way to manage a PKI including ACMEv2, OCSP, and a CRL (Certificate Revocation) list: https://github.com/mathiasertl/django-ca

"How can I verify client certificates against a CRL in Golang?" mentions a bit about crypto/tls and one position on CRLs: https://stackoverflow.com/questions/37058322/how-can-i-verif...

CT (Certificate Transparency) is another approach to validating certs wherein x.509 cert logs are written to a consistent, available blockchain (or in e.g. google/trillian, a centralized db where one party has root and backup responsibilities also with Merkle hashes for verifying data integrity). https://certificate.transparency.dev/ https://github.com/google/trillian

Does docker ever make the docker socket available over the network, over an un-firewalled port by default? Docker Swarm is one config where the docker socket is configured to be available over TLS.

Docker Swarm docs > "Manage swarm security with public key infrastructure (PKI)" https://docs.docker.com/engine/swarm/how-swarm-mode-works/pk... :

> Run `docker swarm ca --rotate` to generate a new CA certificate and key. If you prefer, you can pass the --ca-cert and --external-ca flags to specify the root certificate and to use a root CA external to the swarm. Alternately, you can pass the --ca-cert and --ca-key flags to specify the exact certificate and key you would like the swarm to use.

Docker ("moby") and podman v3 socket security could be improved:

> From "ENH,SEC: Create additional sockets with limited permissions" https://github.com/moby/moby/issues/38879 ::

> > An example use case: securing the Traefik docker driver:

> > - "Docker integration: Exposing Docker socket to Traefik container is a serious security risk" https://github.com/traefik/traefik/issues/4174#issuecomment-...

> > > It seems it only require (read) operations : ServerVersion, ContainerList, ContainerInspect, ServiceList, NetworkList, TaskList & Events.

> > - https://github.com/liquidat/ansible-role-traefik

> > > This role does exactly that: it launches two containers, a traefik one and another to securely provide limited access to the docker socket. It also provides the necessary configuration.

> > - ["What could docker do to make it easier to do this correctly?"] https://github.com/Tecnativa/docker-socket-proxy/issues/13

> > - [docker-socket-proxy] Creates a HAproxy container that proxies limited access to the [docker] socket
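
A minimal Compose sketch of the docker-socket-proxy pattern quoted above (the `CONTAINERS`/`POST` environment switches are from the Tecnativa docker-socket-proxy README as I recall it; verify against the project before use):

```yaml
services:
  socket-proxy:
    image: tecnativa/docker-socket-proxy
    environment:
      CONTAINERS: 1        # permit read-only /containers/* endpoints
      POST: 0              # deny mutating (POST/PUT/DELETE) requests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  traefik:
    image: traefik
    command: --providers.docker.endpoint=tcp://socket-proxy:2375
    depends_on:
      - socket-proxy
```

Traefik then talks only to the proxy, never to the raw socket, so a Traefik compromise can list containers but not start or stop them.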


podman v3 has a docker-compose compatible socket. From https://news.ycombinator.com/item?id=26107022 :

> "Using Podman and Docker Compose" https://podman.io/blogs/2021/01/11/podman-compose.html


Ask HN: What security is in place for bank-to-bank EFT?

When I set up an EFT on a bank's website, all I need to enter is the other bank's routing number and account number, which can be readily found on a paper check. Then you can transfer money from one bank to another... What security and authentication is in place to prevent fraud? In case of fraud, is the victim guaranteed to get the money back?

AFAIU, no existing banking transaction systems require the receiver to confirm in order to receive a funds transfer.

You can create a "multisig" DLT smart contract that requires multiple parties' signatures before the [optionally escrowed] funds are actually transferred.
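
A toy sketch of the multisig idea (plain Python, not an actual smart-contract language; class and party names are hypothetical):

```python
class MultisigTransfer:
    """Toy escrow: funds are released only after M of N parties sign."""
    def __init__(self, signers, required):
        self.signers = set(signers)   # parties allowed to sign
        self.required = required      # M-of-N threshold
        self.signatures = set()
        self.released = False

    def sign(self, party):
        # Signatures from unknown parties are ignored
        if party in self.signers:
            self.signatures.add(party)
        if len(self.signatures) >= self.required:
            self.released = True
        return self.released
```

With `required=2` and signers {sender, receiver, escrow}, a transfer can't complete on the sender's signature alone, which is the receiver-confirmation property missing from plain EFT.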

EFT: Electronic Funds Transfer: https://en.wikipedia.org/wiki/Electronic_funds_transfer

As far as permissions to write to the account ledger: Check signatures are scanned. Cryptoasset keys are very long, high-entropy "passwords". US debit cards are chip+pin; it's not enough to just copy down the card number (and CVV code).

Though credit cards typically are covered by fraud protection, debit card transactions typically aren't: hopefully something will be recovered, but AFAIU debit txs might as well be as irreversible as cryptoasset transactions.

TPL: Transaction Permission Layer is one proposed system for permissions in blockchain; so that e.g. {proof of residence, receiver confirmation, accredited investor status, etc.} can be necessary for a transaction to go through.

ILP: Interledger Protocol > RFC 32 > "Peering, Clearing and Settling" describes how ~EFT with Interledger works: https://interledger.org/rfcs/0032-peering-clearing-settlemen...


Podman: A Daemonless Container Engine

Is the title of this page out of date?

AFAIU, Podman v3 has a docker-compose compatible socket and there's a daemon; so "Daemonless Container Engine" is no longer accurate.

"Using Podman and Docker Compose" https://podman.io/blogs/2021/01/11/podman-compose.html


Podman v3 is compatible with docker-compose (but not yet swarm mode, FWIU), has a socket and a daemon that services it.

Buildah (`podman buildx`, `buildah bud --arch arm64`) just gained multiarch build support; so also building arm64 containers from the same Dockerfile is easy now. https://github.com/containers/buildah/issues/1590

IDK what BuildKit features should be added to Buildah, too?


Cambridge Bitcoin Electricity Consumption Index


Cryptoasset mining creates demand for custom chip fab (how different are mining rigs from SSL/TLS accelerator expansion cards?), which is definitely not zero-sum: more revenue = more opportunities.

https://en.wikipedia.org/wiki/Price_elasticity_of_supply

With insufficient demand, a market does not develop into a sustainable market. "Rule of three (economics)" says that markets are stable with 3 major competitors and many smaller competitors; nonlinearity and game theory.

https://en.wikipedia.org/wiki/Rule_of_three_(economics)

We've always had custom chip fab, but the prices used to be much higher. Proof of Work (and Proof of Research) incentivize microchip and software energy efficiency; whereas we had observed and been most concerned with doublings in transistor density.

FWIU, it's now more sustainable and profitable to mine rare earth elements from recycled electronics than actually digging real value out of the earth?

Compared to creating real value by digging for gold, how do we value financial services?


Bitcoin's fundamental value is negative given its environmental impact


> If the price of energy is calculated with corresponding carbon tax included, shouldn't Bitcoin be neutral?

Yes, but there are vastly more energy-efficient substitute DLTs with near-zero switching costs: Litecoin and scrypt (instead of SHA-256), for example.

Apply a USD/kWhr threshold across all industries.

Is this change (and focus on the external costs of energy production) more the result of penalties or incentives?

Pre-mined coins are vastly more energy efficient (with tx costs <1¢ and similarly minimal kWhr/tx costs), but the market doesn't trust undefined escrow terms that are fair game in commodities and retail markets.

We have trouble otherwise storing energy from noon to commute and dinner time; whereas a commodity like grain may keep for quite a while.

Bitcoin serves as a demand subsidy when heavily-subsidized energy prices crash due to oversupply (which we should recognize as temporary, because we are moving to electric vehicles and need to reach the production volumes at which renewables are more cost-effective than the alternatives).

In the US, we have neither carbon taxes nor intraday prices. The EU has carbon taxes and electrical energy markets.


Ask HN: What are some books where the reader learns by building projects?

2021 Edition. This is a continuation of the previous two threads which can be found here:

https://news.ycombinator.com/item?id=22299180

https://news.ycombinator.com/item?id=13660086

Other resources:

https://github.com/danistefanovic/build-your-own-x

https://github.com/AlgoryL/Projects-from-Scratch

https://github.com/tuvtran/project-based-learning

"Agile Web Development with Rails [6]" (2020) teaches TDD and agile in conjunction with a DRY, CoC, RAD web application framework: https://g.co/kgs/GNqnWV


Is it wrong to demand features in open-source projects?

Yes, it's wrong to demand something for nothing: that's entitlement, not business (which involves some sort of equitable exchange of goods and/or services).

Better questions: How do I file a BUG report issue, create a feature ENHancement request issue, send a pull request with a typo fix, write DOCs and send a PR, write test cases for a bug report?

How can I sponsor development of a feature?

A project may define a `.github/FUNDING.yml`, which GitHub will display on the 'Sponsor' tab of the GitHub project. A project may also or instead include funding information in their /README.md.

How do I ask on IRC or a mailing list or in issues how much it would cost and how long it would take to develop a feature, if somebody had some international stablecoin and a limited-term agreement?

The answer may be something like, "thanks for the detailed use case or user story, that's on our roadmap, there are major issues blocking similar features and that's where the expense would be."

[-]

Turning desalination waste into a useful resource

[+]

Is it possible to capture the natural gas leaking from oil wells like we already capture flue gas? Would that be economical?

[-]

Evcxr: A Rust REPL and Jupyter Kernel

[+]

Here's the xeus-cling (Jupyter C++ Kernel) source: https://github.com/jupyter-xeus/xeus-cling/tree/master/src

Do any of the other non-Python Jupyter kernels have examples of working fancy UI components? https://github.com/jupyter/jupyter/wiki/Jupyter-kernels

Jupyter kernels implement the Jupyter kernel message spec. Introspection, Completion: https://jupyter-client.readthedocs.io/en/latest/messaging.ht...

Debugging (w/ DAP: Debug Adapter Protocol) https://jupyter-client.readthedocs.io/en/latest/messaging.ht...

A `display_data` Jupyter kernel message includes a `data` key with a dict value: "The data dict contains key/value pairs, where the keys are MIME types and the values are the raw data of the representation in that format." https://jupyter-client.readthedocs.io/en/latest/messaging.ht...

This looks like it does something with MIME bundles: https://github.com/jupyter-xeus/xeus-cling/blob/00b1fa69d17b...

ipython.display: https://github.com/ipython/ipython/blob/master/IPython/displ...

ipython.core.display: https://github.com/ipython/ipython/blob/master/IPython/core/...

ipython.lib.display: https://github.com/ipython/ipython/blob/master/IPython/lib/d...
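As a sketch of the `display_data` payload shape described in the message spec above: the `data` dict maps MIME types to raw representations. The `mime_bundle` helper below is hypothetical and only builds the dict shape; it does not send a real kernel message.

```python
# Illustrative sketch of the `display_data` payload shape from the
# Jupyter kernel message spec: `data` maps MIME types to representations.
# (`mime_bundle` is a hypothetical helper, not part of any Jupyter API.)
def mime_bundle(text, html=None):
    """Build a data dict offering plain-text and (optionally) HTML reprs."""
    data = {"text/plain": text}
    if html is not None:
        data["text/html"] = html
    return {"data": data, "metadata": {}}

bundle = mime_bundle("3 rows", html="<b>3 rows</b>")

# In an IPython session, a raw MIME bundle like this renders with:
#   from IPython.display import display
#   display(bundle["data"], raw=True)
```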

You can also run Jupyter kernels in a shell with jupyter/jupyter_console:

    pip install jupyter-console jupyter-client
    jupyter kernelspec list
    jupyter console --kernel python3

[-]

Ask HN: What is the cost to launch a SaaS business MVP

I interviewed several entrepreneurs and noticed that many of them spent a great deal of money developing their product before launch. What is your experience?

When you're not yet paying yourself, your costs are your living costs and opportunity costs (in addition to the given fixed and variable dev and prod deployment cloud costs).

Early feedback from actual customers on an MVP can save lots of development time. GitLab Service Desk is one way to handle emails as issues from users who don't have GitLab accounts.

A beta invite program / mailing list signup page costs very little to set up; you can start building your funnel while you're developing the product.

[-]

Cryptocurrency crime is way ahead of regulators and law enforcement

[+]
[+]
[+]

Bitcoin was created in the context of "Transparency and Accountability": a campaign motto not coincidentally found in the title of the "Federal Funding Accountability and Transparency Act of 2006".

> The Federal Funding Accountability and Transparency Act of 2006 (S. 2590)[2] is an Act of Congress that requires the full disclosure to the public of all entities or organizations receiving federal funds beginning in fiscal year (FY) 2007. The website USAspending.gov opened in December 2007 as a result of the act.

The bill originated in Sen. Obama's office and was fronted by Sens. Coburn and McCain, who had the clout.

The act mandates https://usaspending.gov/ , a database of budgetary line-item metadata. Where money actually goes is far more transparent and accountable with Bitcoin and other public ledgers than with any existing ledger covered by bank secrecy laws.

For context, in 2008-09, global financial systems were failing as a result of the American economy. The housing bubble had burst. There were HFT "flash crash" events that we didn't have CAT or big-data tools to determine the cause of. DDoS attacks and cybersecurity losses were increasing YoY. Credit default swaps had been rated as AAA securities: bundled bad debt was sold like it was worth something, and then the losses were written down. There had been Enron energy speculation amidst rolling blackouts that left hospitals in the dark, on gas generators. Government investment in renewables had been paltry since the Carter administration put solar panels on the roof of the White House amid the oil price shock. Oil commodity speculation had driven the price of oil to something like 2-4x the 2000 price (with resultant price effects on most CPI inflation/PPP basket goods); but electricity consumption was down in 2008, and renewables hadn't reached the production volumes necessary to reach the competitive price point that renewables now present: cheaper than nonrenewables.

Who would have thought that the speculative price would continue to exceed the production cost? Incentives or penalties?

Externalities per dollar returned per kWh is one way to assess the total costs of electricity production methods.

"Buy Gold" was the refrain of the day: TV commercials, signs out in front of piano stores (pianos being a somewhat-arbitrary commodity whose sales are observed to be a leading indicator of economic health), signs on the road. The message was "take your money out of the market and put it in gold", which drives up the prices for the chips, boards, and medical equipment that rely upon that commodity as a material input. Gold is necessary for tropical-spec components in high-humidity environments: gold hinges are prized, for example.

But, "look, there's water flowing from the chocolate fountain; so you can go ahead and go" and "you know you want to put it back in there, in that market" were the appropriate messages given our revenue at the time.

For further technology scene context in 2007-2009,

"Grid computing" links to a number of distributed computing projects: https://en.wikipedia.org/wiki/Grid_computing#History

IIRC, there was a production metric-priced grid system developed around Seattle/Vancouver called "Gold" (?) that was built on Xen and is likely a precursor to metric-priced cloud services like EC2 and S3, which incentivize efficiency by penalizing expensive operations (and which now simplify calculating how much a 51% attack against a Proof of Work txpool with adaptive tx fees costs with n good participants in the game).
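The rent-a-51%-attack calculation can be sketched with back-of-envelope arithmetic; every number below is a hypothetical illustration, not a measurement of any real network:

```python
# Back-of-envelope cost of renting >50% of a Proof-of-Work network's
# hashrate from metered cloud/ASIC markets. All inputs here are
# hypothetical illustrations, not measurements.
def attack_cost(network_hashrate_hs, usd_per_hash_hour, hours):
    """USD to control a majority of total hashrate for `hours`.

    To hold 51% of the *resulting* network, the attacker must add
    slightly more than the honest hashrate; approximated here as 1.02x.
    """
    attacker_hashrate = network_hashrate_hs * 1.02
    return attacker_hashrate * usd_per_hash_hour * hours

# e.g. a 1e9 H/s network, $1e-9 per hash-hour, 6 hours of reorg depth:
cost = attack_cost(1e9, 1e-9, 6)
```

The point of metered pricing is that this number is now straightforward to estimate, which is itself a security parameter of the network.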

Code bloat was already a thing: how is everything getting slower when Moore's Law predicts the growth rate in transistor density? Are there sufficient incentives for code efficiency when there seem to be surplus compute resources just idly depreciating?

MySQL primary/secondary replication was considered a viable distributed database system, but securing replication depends upon cert exchange and (optionally) PKI, DNS, and IP tunnels of some sort. And then who has root, write access to the journal and tables and indexes on the filesystem, and UPDATE and DELETE access in an inter-organizational distributed systems architecture with XML, Web Services, our very own ESB to scale separately from the database replication, off-site backups that nobody ever checks against the online data, and fragmented, varyingly-implemented industry standards that hopefully specify at least a sufficient key for the record that's unique across ledgers/systems/databases?

BitTorrent DHT magnet: links were extant.

There were Linden Dollars in Linden Lab's Second Life (where there's a price floor on land, which is necessary to sell digital assets/goods/products/services), and accumulated avatar value in e.g. EverQuest and WarCraft (for which there were secondhand markets).

ACH was ACH: GPG-signed files over SFTP on the honor of the audited bank to not allow transfers that deposit money that doesn't exist.

There was no common struct for banking APIs (which apparently only e.g. Plaid, Quicken, and Mint solve for): ledger transactions have a fixed-width text field that may contain multiple fields concatenated into one string, and there's no "payee URI" column in the QIF or CSV dumps of an account ledger.

To request more than e.g. the past 90 days of one's own checking account ledger, one was expected to parse tables out of per-month PDFs with e.g. PDFMiner at $20 apiece, and then think up one's own natural key in order to merge and look up records, because (2008-01-01,3.99) and (2008-01-01,3.99,storename) are indistinct as a natural key (and when hashed). If you loan your bank money (for them to freely invest on the other side, since GLBA in 1999), wouldn't you think that the least they could do is give you `SELECT * WHERE account_id=?` as a free CSV, without any datetime limitation in regards to what's offline and what's online?
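The natural-key collision can be shown in a few lines (the records are hypothetical):

```python
# Why (date, amount) fails as a natural key for ledger records:
# two distinct transactions collide, so a merge/dedupe silently drops one.
rows = [
    ("2008-01-01", "3.99", "storename"),
    ("2008-01-01", "3.99", "otherstore"),  # distinct purchase, same day/amount
]

naive_keys = {(date, amount) for date, amount, payee in rows}
# collision: only one key survives, so one record would be lost on merge

# Including the payee (or a per-day sequence number) restores uniqueness:
better_keys = {(date, amount, payee) for date, amount, payee in rows}
```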

"Audit the Fed" and "Audit DoD" were being chanted by economically-aware citizens amidst a severe correction and what was then the most severe recession since the Great Depression: the "Great Recession", it was called; and payouts to cronies (who hadn't saved wheat for the famine) were deemed essential.

Overdraft was an error charged to the customer, who didn't build an inconsistent system (CAP theorem) that allows spending money that doesn't exist (at interest charged to the consumer/taxpayer).

"Catch Me If You Can" (2002) described the controls for bank fraud at the time. Why are fees so high?

"Office Space" (1999) described the penny-shaving / salami-slicing attack: "fractions of a penny".

"Beverly Hills Ninja" (1997) detailed the story of the Great White Ninja and Tanley! (fistpalm)

"Swordfish" (2001) described a domestic disaster and bank transfers confirmable in seconds.

[-]

Ask HN: Why aren't micropayments a thing?

Amazon AWS and related services can charge you a rate per email, or per unit time of computation, so why can't news sites just charge you $0.01 to read an article, or even half that?

[+]

https://webmonetization.org/ lists Coil (flat $5/mo) as the first Web Monetization provider: https://coil.com/

Web Monetization builds upon ILP (Interledger Protocol), which is designed to work with any type of ledger; though it's probably not possible for any traditional ledger to beat the <1¢ transaction fee that only pre-mined coins have been able to achieve.

[+]
[-]

Elon Musk announces $100M carbon capture prize

https://www.xprize.org/prizes/carbon :

> The $20M NRG COSIA Carbon XPRIZE develops breakthrough technologies to convert CO₂ emissions into usable products.

CCS: https://en.wikipedia.org/wiki/Carbon_capture_and_storage

CCU: https://en.wikipedia.org/wiki/Carbon_capture_and_utilization

Sequestration: https://en.wikipedia.org/wiki/Carbon_sequestration

Hemp!? Is hemp the best answer? Thousands of products and Oxygen can be made from Carbon diOxide, Hemp, water, UV radiation, and soil.

[-]

Tim Berners-Lee wants to put people in control of their personal data

[+]
[+]
[+]

While I recognize the value of W3C LDP and SOLID, I also fail to see anything in SOLID that prevents B from sharing A's now pod-siloed information.

Does it prevent screenshots and OCR?

So it's in standard record structs and that makes it harder for the bad guys?

Who moderates mean memes with my face on them?

It is my hope that future Linked Data spec tutorials model something benign like shapes or cells instead of people: so that we can still see the value.

[+]

No, there are few to no actual privacy improvements over centralized systems.

Perhaps even functional regression: what, are you going to run a hash blocklist across all nodes, like Spamhaus? Is there logging or user accounting? Is anything chain-of-custody admissible, or are we actually talking about privacy and liberty here?

Is everything just marked, "not for unlimited distribution"? And we depend upon there not being bad actors?

Real costs are very different with just friendly early adopters.

Cryptographically signing posts (with LD-Signatures) may help with integrity, but that can be done with centralized systems and does nothing to help with confidentiality.

What about availability? Is it a trivially-DOS'able system?

[-]

Governments spurred the rise of solar power

[+]
[+]

Should we prefer penalties or incentives in order to use predictable markets for the change we need?

[+]

I want to minimize the external environmental costs of electricity production and distribution.

Given that the market has selected the least energy-efficient cryptoasset, we should not expect markets to just change given the existing incentives.

> I'd love to see a study where researchers take things that are seen as good/important/essential to modern life and measure the amount of public/government sponsorship that helped bring it about.

Essential technology investment of US tax dollars?

NASA spinoff technologies: https://en.wikipedia.org/wiki/NASA_spinoff_technologies

NSF, DARPA, IARPA, In-Q-Tel, ARPA-e (2009)

List of emerging technologies: https://en.wikipedia.org/wiki/List_of_emerging_technologies

[-]

Termux no longer updated on Google Play

[+]

Note that there are 297 hidden items in that issue, so you have to click "Load more..." ceil(297/60) times to read all of the comments about how APK packaging is soon necessary for the latest Android devices, so the Termux package manager can't just dump executable binaries wherever.

FWIU:

- Android Q+ disallows exec() on anything in $HOME, which is where termux installed binaries that may have been writeable by the executing user.

- Binaries installed from APKs can be exec()'d, so termux must keep APK repacks rebuilt and uploaded to a play store.

- Termux shouldn't be installed from Google Play anymore: you should install termux from the F-Droid APK package repos, and it will install APKs instead of what it was doing.

- Compiling to WASM with e.g. emscripten or WASI was one considered alternative. "Emscripten & WASI & POSIX" https://github.com/emscripten-core/emscripten/issues/9479

[+]

> > offer users the option of generating an apk wrapping their native code in a usable way.

> This seems a promising solution: compile from source, create an apk, install - your custom distribution! For popular collections of packages, a pre-built apk.

FPM could probably generate APKs in addition to the source archive and package types that it already supports.

The conda-forge package CI flow is excellent. There's a bot that sends a pull request to update the version number in the conda package feedstock's meta.yaml when it detects that e.g. there's a new version of the source package on e.g. PyPI. When a PR is merged, conda-forge builds on Win/Mac/Lin and publishes the package to the conda-forge package channel (`conda install -y -c conda-forge jupyterlab pandas`).

The Fedora GitOps package workflow is great too, but bugzilla isn't Markdown by default.

Keeping those APKs updated and rebuilt is work.

[-]

Ask HN: What should go in an Excel-to-Python equivalent of a couch-to-5k?

Yesterday, my co-founder published a blog about her experiences Ditching Excel for Python in her job as a Reinsurance Analyst [0].

One of the responses on reddit [1] asked what they should do, "Step 1 day 1," if having read Amy's post they were convinced to try and begin the long journey from tangled Excel/Access spaghetti.

My (flippant) reaction to a friend that brought the comment to my attention was unhelpful; "Step 1 day 1, quit." So he has challenged me to write eight helpful blog posts during the remainder of my Garden Leave.

What should go in them?

[0] https://amypeniston.com/ditching-excel-for-python/

[1] https://www.reddit.com/r/Python/comments/knbv5t/ditching_excel_for_python_lessons_learned_from_a/ghm559c/?utm_source=reddit&utm_medium=web2x&context=3

How to write functions in JS / VB script and call them from a cell expression.

How to name variables something other than AB3.

How to use physical units and datatypes. (How to specify XSD datatype URIs that map to native primitives in an additional frozen header row. e.g. py-moneyed and Pint & NumPy ufuncs)

How transitive sort works (is there a tsort to determine what to calculate first (and whether there are cycles) on every modification event?)

Which Jupyter platforms do and don't support interactive charts with e.g. ipywidgets?

pandas.df.plot(kind=) (matplotlib), seaborn (what are the calculated parameters of this chart?), holoviews, plotly, altair

Reproducibility w/ repo2docker / BinderHub:

    pip freeze > constraints.txt
    cp constraints.txt requirements.txt
    conda env export --from-history

Also,

When is it better to have code in a notebook instead of in a module?

How to export notebook cells to a module with nbdev

How to write tests to assert the quality of the code and the model: @pytest.mark.parametrize, pytest-notebook, jupyter-pytest-2, pytest-jupyter

When is it appropriate to parametrize a notebook with e.g. papermill?

How to handle concurrency: dask.distributed + dask-labextension, ipyparallel
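The transitive-sort question above (what to calculate first, and whether there are cycles) can be sketched with the stdlib `graphlib` (Python 3.9+); the cell names and dependencies are hypothetical:

```python
# Sketch of spreadsheet recalculation order via topological sort
# (the "tsort" question above), using stdlib graphlib (Python 3.9+).
# Cell names and dependencies are hypothetical.
from graphlib import TopologicalSorter, CycleError

# deps[cell] = cells that must be computed before it
deps = {"C1": {"A1", "B1"}, "B1": {"A1"}, "A1": set()}

order = list(TopologicalSorter(deps).static_order())
# "A1" is computed first; "C1" last

# A circular reference raises CycleError instead of looping forever:
try:
    list(TopologicalSorter({"A1": {"B1"}, "B1": {"A1"}}).static_order())
except CycleError:
    pass  # a spreadsheet would report a circular-reference error here
```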

[+]

Yeah, if you port it to functions and verify that you haven't broken anything, you could then easily port to Python functions that you can call from Excel with an add-in; but everyone that opens a sheet that calls Python must have that same add-in (and all Python package dependencies) installed, too.

[+]
[+]

> I suspect many companies use excel workbooks as "forms" with lots of data at the same cell in multiple workbooks.

Downstream data-quality costs can be minimized with normalized data schemas and data-collection process controls like forms-based data validation.

There are established UI/UX design patterns for validating user-supplied data: accessible [web] forms with tab-ordered input fields and specific per-input feedback, built with accessible HTML5 and ARIA. IIUC, Firefox now supports PDF forms, too?

Why would we move from a spreadsheet to an actual database?

Data integrity:

Referential integrity (making sure that record keys actually point to something when creating, updating, or deleting),

Columnar datatypes (float, decimal, USD, complex fraction),

Access controls (auth(z): authentication and authorization),

Auditing (what was the value before and after that) and Disaster Recovery,

Organizationally-unified schema development and corresponding validation.

Repeatability / Reproducibility: can you replay the steps needed to build the whole sheet? What parameters were entered, and how do we script that part so that we can easily assess the relations between the terms of the argument presented?
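The referential-integrity point can be sketched with stdlib `sqlite3` (the table and column names are hypothetical): a spreadsheet happily stores a dangling key, while a database with foreign keys enforced rejects it.

```python
# Referential integrity with stdlib sqlite3 (hypothetical schema).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
con.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
con.execute(
    "CREATE TABLE invoice (id INTEGER PRIMARY KEY,"
    " customer_id INTEGER REFERENCES customer(id))"
)
con.execute("INSERT INTO customer (id) VALUES (1)")
con.execute("INSERT INTO invoice VALUES (1, 1)")  # OK: customer 1 exists

rejected = False
try:
    con.execute("INSERT INTO invoice VALUES (2, 999)")  # dangling key
except sqlite3.IntegrityError:
    rejected = True  # the database refuses to store the broken reference
```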

[-]

Scientists turn CO2 into jet fuel

[+]

Yao, B., Xiao, T., Makgae, O.A. et al. "Transforming carbon dioxide into jet fuel using an organic combustion-synthesized Fe-Mn-K catalyst." Nat Commun 11, 6395 (2020). https://doi.org/10.1038/s41467-020-20214-z

[+]
[+]
[+]
[+]

You can run aircraft on electricity.

Locomotives run on electrical energy produced by onboard diesel generators (because electric traction motors are more energy-efficient), for example.

The limits are the cost and weight of the batteries and the charge time.

[+]

From https://en.wikipedia.org/wiki/Electric_aircraft :

> [For] large passenger aircraft, an improvement of the energy density by a factor 20 compared to li-ion batteries would be required

The time it takes to surpass this energy-density threshold is affected by battery tech investments, which had been comparatively paltry next to defense spending. Trillions on batteries would've been a much better investment, with ROI.

Sadly, some folks in defense still can't understand why non-oil investments in battery tech are best for all.

There are multiple electric trainer aircraft with flight times over an hour and quite a few more in development.

Jet engines are terribly inefficient (30-50% efficient) compared to electric motors.

[-]

Show HN: Stork: A customizable, WASM-powered full-text search plugin for the web

jil | 2020-12-27 14:16:01 | 137
[+]
[+]
[+]

> Merkle Search Trees: Efficient State-Based CRDTs in Open Networks https://hal.inria.fr/hal-02303490/document

https://scholar.google.com/scholar?cites=7160577141569533185... ... "Merkle Hash Grids Instead of Merkle Trees" (2020) https://scholar.google.com/scholar?cluster=13503894708682701...

Browser-side "Blockchain Certificate Transparency" applications need to support at least exact key lookup by domain/SAN and then also by cert fingerprint value; but the whole CT chain with every cert issue and revocation event is impractically large in terms of disk space.

https://github.com/amark/gun#history may also be practically useful.

[-]

Upptime – GitHub-powered open-source uptime monitor and status page

[+]

https://news.ycombinator.com/item?id=25557032 mentions "~3000 minutes per month". GitLab's new pricing structure: [(runner_minutes, usd_per_month), (400, $0), (2_000, $4), (10_000, $19), (50_000, $99)]

You can run a self-hosted GitHub or GitLab Runner with your own resources: https://docs.github.com/en/free-pro-team@latest/actions/host...

GitLab [Runner] also runs tasks on cron schedules.

The process-invocation overhead for CI is greater than for a typical metrics-collection process like a Nagios check, or a memory-resident daemon like collectd with the curl plugin and the "Write HTTP" plugin (if you're not into using a space- and time-efficient timeseries database for metrics storage).

An open source project with a $5/mo VPS could run collectd in a container with a config file far far more energy efficiently than this approach.

Collectd curl statistics: https://collectd.org/documentation/manpages/collectd.conf.5....

Collectd list of plugins: https://collectd.org/wiki/index.php/Table_of_Plugins

Is there a good way to do {DNS, curl HTTP, curl JSON} stats with Prometheus (instead of e.g. collectd as a minimal approach)?
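As a minimal sketch of the curl-HTTP-stats idea: a probe that emits Prometheus text-exposition lines. The metric names are modeled loosely on Prometheus's blackbox_exporter but are hypothetical here; a real exporter would serve these on a /metrics endpoint.

```python
# Minimal HTTP response-time probe emitting Prometheus-style
# text-exposition lines (metric names hypothetical).
import time
import urllib.request

def format_metric(name, labels, value):
    """Render one metric line, e.g. probe_up{url="http://x"} 1"""
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

def probe(url, timeout=5):
    """Fetch `url` once; return status-code and duration metric lines."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        status = resp.status
    elapsed = time.monotonic() - start
    return [
        format_metric("probe_http_status_code", {"url": url}, status),
        format_metric("probe_duration_seconds", {"url": url}, round(elapsed, 3)),
    ]
```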

[-]

Show HN: Simple-graph – a graph database in SQLite

[+]

rdflib-sqlalchemy is a SQLAlchemy rdflib graph store backend: https://github.com/RDFLib/rdflib-sqlalchemy

It also persists namespace mappings so that e.g. schema:Thing expands to http://schema.org/Thing

The table schema and indices are defined in rdflib_sqlalchemy/tables.py: https://github.com/RDFLib/rdflib-sqlalchemy/blob/develop/rdf...

You can execute SPARQL queries against SQL, but most native triplestores will have a better query plan and/or better performance.

Apache Rya, for example:

> indexes SPO, POS, and OSP.
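What the three index permutations buy can be sketched with toy in-memory dicts (the data is hypothetical; real triplestores use sorted, persistent indexes): each permutation answers a different query pattern with a prefix lookup.

```python
# Toy sketch of SPO/POS/OSP triple indexes: each permutation serves a
# different bound-variable pattern via prefix lookup.
from collections import defaultdict

triples = [
    ("alice", "knows", "bob"),
    ("alice", "type", "Person"),
    ("bob", "type", "Person"),
]

spo, pos, osp = defaultdict(set), defaultdict(set), defaultdict(set)
for s, p, o in triples:
    spo[s].add((p, o))  # find ?p ?o given a subject
    pos[p].add((o, s))  # find ?o ?s given a predicate
    osp[o].add((s, p))  # find ?s ?p given an object

# "Who is a Person?" hits the POS index:
people = {s for (o, s) in pos["type"] if o == "Person"}
```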

[+]
[-]

In CPython, types implemented in C are part of the type tree

The docs should have coverage on this:

Python/C API Reference Manual: https://docs.python.org/3/c-api/index.html

Python/C API Reference Manual » Object Implementation Support > Type Objects: https://docs.python.org/3/c-api/typeobj.html

CPython Devguide > Exploring Python Internals > Additional References: https://devguide.python.org/exploring/

[-]

Experiments on a $50 DIY air purifier that takes 30s to assemble

From "Better Box Fan Air Purifier" https://tombuildsstuff.blogspot.com/2013/06/better-box-fan-a... :

> Air purifiers can be expensive and you've probably seen articles recommending to just put a 20" x 20" x 1" furnace filter on a cheap 20" box fan and POOF! instant cleaner air for not a lot of money. It really does clean the air pretty cheap.

> There's a problem with this though. These fans weren't designed to be run with a filter. The filter will restrict air flow which will put a higher strain on the motor causing it to use more electricity and in worse cases could be a fire hazard. The higher the MERV rating (cleaning efficiency) of the filter the more stress it will put on the fan.

> Don't worry! You can still have your cheap air purifier as long as the filter area is increased to decrease the effect of air resistance. Instead of using one 20x20x1 filter we'll use two 20x25x1 filters which increases the filter surface area over 250%. It's a little more expensive because you're using two filters instead of one but the increased filter surface area also helps the filter last longer before it gets clogged up and we're saving on energy use compared to a single filter.
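The quoted "over 250%" surface-area figure checks out:

```python
# Checking the quoted filter-surface-area figure:
# two 20"x25" filters vs. one 20"x20" filter.
single = 20 * 20         # 400 sq in
double = 2 * (20 * 25)   # 1000 sq in
ratio = double / single  # 2.5, i.e. 250% of the original area
```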

[+]

A. Putting two [larger] filters in a 'V' with cardboard to fill the top and bottom pulls the same amount of air through a larger area of filters

B. Pulling the same volume of air through greater surface area results in greater pressure between the filter and fan than one filter directly affixed to the fan.

C. The lower air pressure / "suction" due to an obstructed intake causes an electric fan motor to fail more quickly.

D. Increasing the air pressure that the motor is in reduces the failure rate?

[-]

Goodreads plans to retire API access, disables existing API keys

[+]
[+]
[+]

The end of an era. Sad to see. First IMDB and now GoodReads. So much for open data. Thanks for the bait and switch. Good thing we trusted them with our data.

Welp, time to start a better book catalog site with threaded discussions that eBook page turns can be synced with.

[-]

Python Pip 20.3 Released with new resolver

[+]
[+]
[+]
[+]

Pip supports constraints files. https://pip.pypa.io/en/stable/user_guide/#constraints-files :

> Constraints files are requirements files that only control which version of a requirement is installed, not whether it is installed or not. Their syntax and contents is nearly identical to Requirements Files. There is one key difference: Including a package in a constraints file does not trigger installation of the package.

> Use a constraints file like so:

    python -m pip install -c constraints.txt

[+]
[+]
[+]

"Experience has shown"?

Did you go create a test case? Or at least link to a specific issue?

[-]

How to better ventilate your home

[+]
[+]

"Fan control with a Nest thermostat" https://support.google.com/googlenest/answer/9296419?hl=en

Looks like there could be: (1) an every-hour-for-n-minutes schedule; (2) an option to run the fan with the thermostat off; and (3) an option to shut off the fan when everyone is gone.

[-]

Quantum-computing pioneer Peter Shor warns of complacency over Internet security

If an organization has a 5-year refresh cycle (~the time to implement a new IT system), and there exists a quantum computer with a sufficient number of error-corrected qubits by 2027 [1], then an organization/industry has 5 years from 2022 to go quantum-resistant: replace their existing solution with quantum-resistant algos (and, in some cases, a DLT with a coherent pan-industry API) and/or double their RSA and ECDSA key sizes.

[1] "Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256)" https://news.ycombinator.com/item?id=15907523

Which DLT/blockchains without PKI (or DNS) will implement the algorithms selected from the NIST Post-Quantum Cryptography (PQC) round 3 candidate algorithms? https://csrc.nist.gov/projects/post-quantum-cryptography

[-]

CERN Online introductory lectures on quantum computing from 6 November

[+]
[+]
[+]

https://quantumalgorithmzoo.org/ lists algorithms, speedups, and descriptions.

[+]

The "linear systems" and "machine learning" algorithm paragraphs under "Optimization, Numerics, & Machine Learning" reference a number of resources in regards to currently understood limits of and applications for quantum computers and linear optimization.

[-]

A Manim Code Template

The demo video looks cool. It's maybe not obvious that the README links to the demo video for code-video-generator (which is built on 3blue1brown's manim): https://youtu.be/Jn7ZJ-OAM1g

Source: https://github.com/sleuth-io/code-video-generator

[-]

Startup Financial Modeling: What is a Financial Model? (2016)

https://www.causal.app/ has free business model templates: SaaS (Foresight), eCommerce (https://foresight.is/), Startup Runway, Buy/Rent, Ads Calculator

[+]

We'd do better to find a list of business modeling books and tools.

And then take a look at integrating actual data sources; hopefully some quantitative with APIs.

Uncertainties supports mean±"error" w/ "error propagation": https://pypi.org/project/uncertainties/
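The mean±error propagation that `uncertainties` provides can be sketched in pure Python for the independent-error sum and product cases (the package itself handles correlations and arbitrary functions; this is just the first-order special case). The revenue example at the end is hypothetical:

```python
# First-order error propagation for independent errors: the idea behind
# the `uncertainties` package, reduced to the sum and product cases.
import math

def add(a, da, b, db):
    """(a ± da) + (b ± db): absolute errors add in quadrature."""
    return a + b, math.hypot(da, db)

def mul(a, da, b, db):
    """(a ± da) * (b ± db): relative errors add in quadrature."""
    value = a * b
    rel = math.hypot(da / a, db / b)
    return value, abs(value) * rel

# Hypothetical model input: (units sold ± error) x (price ± error)
revenue, d_revenue = mul(1000, 100, 4.99, 0.25)
```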

Sliders etc can be done in Jupyter notebooks with e.g. ipywidgets: https://ipywidgets.readthedocs.io/en/latest/

[-]

At what grade level do presidential candidates debate?

Intelligence does not imply superior moral, ethical, or rational judgement.

Simplicity of speech does not imply lack of intelligence.

Here's the section on Simple English in Simple English Wikipedia: https://simple.wikipedia.org/wiki/Wikipedia:About#Simple_Eng...

Imagine being reprimanded for use of complex words and statistical terms in an evidence-based policy discussion in a boardroom. Imagine someone applying to be CEO, President, or Chairman of the Board and showing up without a laptop, any charts, or any data.

Topicality!

Perhaps there is a better game for assessing competency to practice evidence-based policy.

This commenter effectively refutes the claim that Flesch-Kincaid is a useful metric for assessing the grade level of interpretively-punctuated spoken language: https://news.ycombinator.com/item?id=24807610

Like I said, from "Ask HN: Recommendations for online essay grading systems?" https://news.ycombinator.com/item?id=22921064 :

> Who else remembers using the Flesch-Kincaid Grade Level metric in Word to evaluate school essays? https://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readabi...

> Imagine my surprise when I learned that this metric is not one that was created for authors to maximize: reading ease for the widest audience is not an objective in some departments, but a requirement.

> What metrics do and should online essay grading systems present? As continuous feedback to authors, or as final judgement?

That being said, disrespectful little b will not be tolerated or venerated by the other half of the curve.

[-]

ElectricityMap – Live CO₂ emissions of electricity production and consumption

jka | 2020-10-11 14:30:27 | 221
[+]
[+]
[+]
[+]
[+]

How would behind the meter electricity consumption change the reported amount of CO2 emitted by other electricity production sources?

[+]

Man that was a dumb question. Thanks for clarifying.

So we'd need electric utility companies to share the live data of how many kwh of solar and wind people are selling back to the grid in order to get an accurate regional comparison of real-time carbon intensity?

FWIU, they're already parsing the EIA data; but it's significantly more delayed than the max 2 hour delay specified by ElectricityMap.

Here's the parser for the current data from EIA: https://github.com/tmrowco/electricitymap-contrib/blob/maste...

Should the EIA (1) source, aggregate, cache, and make more real-time data available; and (2) create a new data item for behind the meter kwh from e.g. residential wind and solar?

(Edit) "Does EIA publish data on peak or hourly electricity generation, demand, and prices?" https://www.eia.gov/tools/faqs/faq.php?id=100&t=3

> Hourly Electric Grid Monitor is a redesigned and enhanced version of the U.S. Electric System Operating Data tool. It incorporates two new data elements: hourly electricity generation by types of energy/fuel source and hourly sub-regional demand for certain balancing authorities in the Lower 48 states.

> [...]

> EIA does not publish hourly electricity price data, but it does publish wholesale electricity market information including daily volumes, high and low prices, and weighted-average prices on a biweekly basis.

AFAIU, retail intraday rates aren't yet really a thing in the US; but some countries in Europe do have intraday rates (which create incentives for the grid scale energy storage necessary for wide-scale rollout of renewables).

(Edit) "Introduction to the World of Electricity Trading" https://www.investopedia.com/articles/investing/042115/under... :

> Energy prices are influenced by a variety of factors that affect the supply and demand equilibrium. On the demand side, commonly referred to as a load, the main factors are economic activity, weather, and general efficiency of consumption. On the supply side, commonly referred to as generation, fuel prices and availability, construction costs and the fixed costs are the main drivers of the price of energy. There's a number of physical factors between supply and demand that affect the actual clearing price of electricity. Most of these factors are related to the transmission grid, the network of high voltage power lines and substations that ensure the safe and reliable transport of electricity from its generation to its consumption.

Which customers (e.g. data centers, mining firms) would take advantage of retail intraday rates?

How does cost and availability of storage affect the equilibrium price of electricity?
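
As a back-of-the-envelope sketch, here's a toy Python model of battery arbitrage against hourly prices. All the numbers (prices, capacity, round-trip efficiency) are hypothetical, and real markets add degradation, transmission constraints, and the price impact of the storage itself:

```python
# Toy model: value of buying low / selling high with storage,
# given a hypothetical sequence of hourly prices.

def arbitrage_value(prices, capacity_kwh, efficiency=0.9):
    """Best single buy-then-sell profit for one charge/discharge cycle."""
    best = 0.0
    for i, buy in enumerate(prices):
        for sell in prices[i + 1:]:
            profit = (sell * efficiency - buy) * capacity_kwh
            best = max(best, profit)
    return best

hourly_prices = [0.03, 0.02, 0.05, 0.12, 0.09]  # $/kWh, hypothetical
print(arbitrage_value(hourly_prices, capacity_kwh=10))  # buys at 0.02, sells at 0.12
```

The wider the intraday spread, the more valuable the storage; and as more storage chases the same spread, the spread itself flattens, which is one way storage moves the equilibrium price.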

[+]
[+]
[+]

From https://github.com/tmrowco/electricitymap-contrib#data-sourc... :

> Here are some of the ways you can contribute:

> Building a new parser, Fixing a broken parser, Changes to the frontend, Find data sources, Verify data sources, Translating electricitymap.org, Updating region capacities

I sent a few tweets and emails about the data in this region, but nothing happened here either.

[-]

Bash Error Handling

From https://twitter.com/b0rk/status/1312413117436104705 :

> TIL that you can use the "DEBUG" trap to step through a bash script line by line

  trap '(read -p "[$BASH_SOURCE:$LINENO] $BASH_COMMAND?")' DEBUG
> [...] it does something very different than sh -x — sh -x will just print out lines; this stops *before* every single line and lets you confirm that you want to run *that line*

>> you can also customize the prompt with set -x

  export PS4='+(${BASH_SOURCE}:${LINENO}) '
  set -x
With a markdown_escape function, could this make for something like a notebook with ```bash fenced code blocks with syntax highlighting?
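
A minimal sketch of what that might look like: a hypothetical `markdown_escape` that wraps xtrace output in a ```bash fence, choosing a fence longer than any backtick run inside the text so the block can't be broken out of:

```python
import re

def markdown_escape(trace_text):
    """Wrap shell trace output in a fenced code block, picking a fence
    longer than the longest backtick run in the text."""
    runs = re.findall(r"`+", trace_text)
    fence = "`" * max(3, max((len(r) for r in runs), default=0) + 1)
    return f"{fence}bash\n{trace_text.rstrip()}\n{fence}\n"

trace = "+(./demo.sh:3) echo hello\nhello"
print(markdown_escape(trace))
```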

[-]

A Customer Acquisition Playbook for Consumer Startups

> For consumer companies, there are only three growth “lanes” that comprise the majority of new customer acquisition:

> 1. Performance marketing (e.g. Facebook and Google ads)

> 2. Virality (e.g. word-of-mouth, referrals, invites)

> 3. Content (e.g. SEO, YouTube)

> There are two additional lanes (sales and partnerships) which we won't cover in this post because they are rarely effective in consumer businesses. And there are other tactics to boost customer acquisition (e.g PR, brand marketing), but the lanes outlined above are the only reliable paths for long-term and sustainable business growth.

Marketing calls those "channels". I don't think they're exclusive categories: a startup's YouTube videos could be supporting a viral marketing campaign, for example; ads aren't the only strategy for (targeted) social media marketing; if the "ask" / desired behavior upon receiving the message is to share the brand, is that "viral"?

What about Super Bowl commercials?

Traditional marketing: press releases, (linked-citation-free) news wires, quasi-paid interviews, "news program" appearances, product placement.

"Growth hacking": https://en.wikipedia.org/wiki/Growth_hacking

[-]

Jupyter Notebooks Gallery

[+]

Jupyter/Jupyter > Wiki > "A gallery of interesting Jupyter Notebooks" lists hundreds of notebooks: https://github.com/jupyter/jupyter/wiki/A-gallery-of-interes...

The mybinder.org Grafana dashboard lists the most popular notebook repos in the last hour: https://grafana.mybinder.org/

Jupyter/Nbviewer > FAQ > "How do you choose the notebooks featured on the nbviewer.jupyter.org homepage?" :

> We originally selected notebooks that we found and liked. We are currently soliciting links to refresh the home page using a Google Form. You may also open an issue with your suggestion.

https://nbviewer.jupyter.org/faq#how-do-you-choose-the-noteb...

Google Form: https://docs.google.com/forms/d/e/1FAIpQLSd6AlVvC7KagENypGTc...

Here's the Nbviewer source code. AMP (and https://schema.org/ScholarlyArticle / Book / CreativeWork metadata) could be useful. https://github.com/jupyter/nbviewer

[+]
[+]

Jupytext:

> Jupyter Notebooks as Markdown Documents [MyST Markdown, R Markdown], Julia, Python or R scripts

https://github.com/mwouts/jupytext

[-]

NestedText, a nice alternative to JSON, YAML, TOML

[+]
[+]
[+]

JSON5 also supports comments and multiline strings with `\`-escaped newlines: https://json5.org/

Triple-quoted multiline strings like HJSON would be great, too.

From "The description of YAML in the README is inaccurate" https://github.com/KenKundert/nestedtext/issues/10 :

> I will mention something else. The section about the "Norway problem" is not quite accurate. Some YAML loaders do in fact load no as false. These are usually YAML 1.1 loaders. YAML 1.2's default schema is the same as JSON's (only true, false, null, and numbers are non-strings).

> Any YAML loader is free to use any schema it wants. That is, no loader is required to load no as false. Good loaders should support multiple schemas and custom schemas. The Norway problem isn't technically a YAML problem but a schema problem.

> imho, YAML's biggest failing to date is not making things like this clear enough to the community.

> Note: PyYAML has a BaseLoader schema that loads all scalar values as strings.
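
To illustrate the schema point with a stdlib-only sketch (this models scalar resolution only, it is not a YAML parser): the same plain scalar resolves differently under YAML 1.1-style rules than under the YAML 1.2 core schema:

```python
# YAML 1.1-style boolean scalars (the schema behind the "Norway problem").
YAML_11_BOOLS = {"y": True, "yes": True, "true": True, "on": True,
                 "n": False, "no": False, "false": False, "off": False}

def resolve_11(scalar):
    """YAML 1.1-style resolution: many words become booleans."""
    return YAML_11_BOOLS.get(scalar.lower(), scalar)

def resolve_12(scalar):
    """YAML 1.2 core schema: only true/false, null, and numbers are non-strings."""
    if scalar in ("true", "false"):
        return scalar == "true"
    if scalar in ("null", "~", ""):
        return None
    try:
        return int(scalar)
    except ValueError:
        try:
            return float(scalar)
        except ValueError:
            return scalar

print(resolve_11("no"))   # False  (NO -> the Norway surprise)
print(resolve_12("no"))   # 'no'   (stays a string)
```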

[-]

Algorithm discovers how six molecules could evolve into life’s building blocks

[+]

Folding@home https://en.wikipedia.org/wiki/Folding@home :

> Folding@home (FAH or F@h) is a distributed computing project aimed to help scientists develop new therapeutics to a variety of diseases by the means of simulating protein dynamics. This includes the process of protein folding and the movements of proteins, and is reliant on the simulations run on the volunteers' personal computers.

"AlphaFold: Using AI for scientific discovery" (2020) https://deepmind.com/blog/article/AlphaFold-Using-AI-for-sci...

https://www.kdnuggets.com/2019/07/deepmind-protein-folding-u... :

> At last year’s Critical Assessment of protein Structure Prediction competition (CASP13), researchers from DeepMind made headlines by taking the top position in the free modeling category by a considerable margin, essentially doubling the rate of progress in CASP predictions of recent competitions. This is impressive, and a surprising result in the same vein as if a molecular biology lab with no previous involvement in deep learning were to solidly trounce experienced practitioners at modern machine learning benchmarks.

Citations of "Resource-efficient quantum algorithm for protein folding" (2019) https://scholar.google.com/scholar?cites=1037213034434902738...

Protein folding: https://en.wikipedia.org/wiki/Protein_folding

[+]
[+]

"Applied CS"

Computational science: https://en.wikipedia.org/wiki/Computational_science

Computational biology: https://en.wikipedia.org/wiki/Computational_biology

Computational thinking: https://en.wikipedia.org/wiki/Computational_thinking :

> The characteristics that define computational thinking are decomposition, pattern recognition / data representation, generalization/abstraction, and algorithms.

Additional skills useful for STEM fields: system administration / DevOps / DevSecOps, HPC: High Performance Computing (distributed systems, distributed algorithms, performance optimization; rewriting code that is designed to test unknown things with tests and for performance), research a graph of linked resources and reproducibly publish in LaTeX and/or computational notebooks such as Jupyter notebooks, dask-labextension, open source tool development (& sustainable funding) that lasts beyond one grant

Physicists build circuit that generates clean, limitless power from graphene

> In the 1950s, physicist Léon Brillouin published a landmark paper refuting the idea that adding a single diode, a one-way electrical gate, to a circuit is the solution to harvesting energy from Brownian motion. Knowing this, Thibado's group built their circuit with two diodes for converting AC into a direct current (DC). With the diodes in opposition allowing the current to flow both ways, they provide separate paths through the circuit, producing a pulsing DC current that performs work on a load resistor.

> Additionally, they discovered that their design increased the amount of power delivered. "We also found that the on-off, switch-like behavior of the diodes actually amplifies the power delivered, rather than reducing it, as previously thought," said Thibado. "The rate of change in resistance provided by the diodes adds an extra factor to the power."

> The team used a relatively new field of physics to prove the diodes increased the circuit's power. "In proving this power enhancement, we drew from the emergent field of stochastic thermodynamics and extended the nearly century-old, celebrated theory of Nyquist," said coauthor Pradeep Kumar, associate professor of physics and coauthor.

> According to Kumar, the graphene and circuit share a symbiotic relationship. Though the thermal environment is performing work on the load resistor, the graphene and circuit are at the same temperature and heat does not flow between the two.

> That's an important distinction, said Thibado, because a temperature difference between the graphene and circuit, in a circuit producing power, would contradict the second law of thermodynamics. "This means that the second law of thermodynamics is not violated, nor is there any need to argue that 'Maxwell's Demon' is separating hot and cold electrons," Thibado said.

[+]

I'm not sure that I understand either. From the abstract (which phys.org failed to link to):

> The system reaches thermal equilibrium and the rates of heat, work, and entropy production tend quickly to zero. However, there is power generated by graphene which is equal to the power dissipated by the load resistor.

Looks like the article is also on ArXiV: https://arxiv.org/abs/2002.09947

https://scholar.google.com/scholar?oi=bibs&hl=en&cluster=103...

Is it really a closed system at equilibrium?

Hopefully these can be sandwiched below solar panels to harvest thermal energy from the gradient.

[-]

Mozilla shuts project Iodide: Datascience documents in browsers

I did this! I killed it and I didn't mean to.

Ten (10) days ago, I filed an issue in the iodide project: "Compatibility with 'percent' notebook format" https://github.com/iodide-project/iodide/issues/2942

And then six (6) days ago, I added this comment to that issue: https://github.com/iodide-project/iodide/issues/2942#issueco...

And now it's almost dead, and I didn't mean to kill it.

But I also suggested that it would be great if conda-forge had a WASM build target:

- "Consider moving CPython patches upstream" https://github.com/iodide-project/pyodide/issues/635#issueco...

For students, being able to go to a URL and have a notebook interface with the SciPy stack preinstalled without needing to have an organization manage shell accounts and/or e.g. JupyterHub for every student should be worth the necessary budget allocation. Their local machines have plenty of CPU, storage, and memory for all but big data workloads.

Iodide is/was really cool. Pyodide (much of the SciPy stack compiled to WASM) is also a great idea.

Jyve with the latest JupyterLab, nbgrader, and configurable cloud storage could also solve this.

[+]

There are many ways to share reproducible Jupyter notebooks.

Google Colab now supports ipywidgets (js) in notebooks. While you can install additional packages in Colab, additional packages must be installed by each user (e.g. with `! pip install sympy` in an initial input cell) for each new kernel.

repo2docker builds a docker image from software dependency versions specified in e.g. requirements.txt, environment.yml, and/or a postInstall script and then installs a current version of JupyterLab in the container. Zero-to-BinderHub describes how to get BinderHub (which builds and launches containers) running on a hosting provider w/ k8s. awesome-python-in-education/blob/master/README.md#jupyter

Google AI Platform Notebooks is hosted JupyterLab.

awesome-jupyter > Hosted Notebook Solutions lists a number of services: https://github.com/markusschanta/awesome-jupyter#hosted-note...

awesome-python-in-education > Jupyter links to many Jupyter resources like nbgrader and BinderHub but not yet Jyve: https://github.com/quobit/awesome-python-in-education#jupyte...

[-]

Ask HN: What are good life skills for people to learn?

My initial thoughts: learn to drive, first aid, a sport, play an instrument, a language, how to manage finances, to speak in front of people.

- "Consumer science (a.k.a. home economics) as a college major" https://news.ycombinator.com/item?id=17894550

In no particular order:

- Food science; Nutrition

- Family planning: https://en.wikipedia.org/wiki/Family_planning

- Personal finance (see the link above for resources)

- How to learn

- How to teach [reading and writing, STEM, respect, compassion]

- Compassion for others' suffering

- How to considerately escape from unhealthy situations

- Coping strategies: https://en.wikipedia.org/wiki/Coping

- Defense mechanisms: https://en.wikipedia.org/wiki/Defence_mechanism

- Prioritization; productivity

- Goal setting; n-year planning; strategic alignment

Life skills: https://en.wikipedia.org/wiki/Life_skills

Khan Academy > Life Skills: https://www.khanacademy.org/college-careers-more

Four Keys Project metrics for DevOps team performance

> […] four key metrics that indicate the performance of a software development team:

> Deployment Frequency - How often an organization successfully releases to production

> Lead Time for Changes - The amount of time it takes a commit to get into production

> Change Failure Rate - The percentage of deployments causing a failure in production

> Time to Restore Service - How long it takes an organization to recover from a failure in production
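
A minimal sketch of computing the four metrics from hypothetical deploy and incident logs (the event records and field layout below are invented for illustration):

```python
from datetime import datetime

# (deploy_time, commit_time, caused_failure) -- hypothetical events
deploys = [
    (datetime(2021, 1, 4), datetime(2021, 1, 3), False),
    (datetime(2021, 1, 6), datetime(2021, 1, 5), True),
    (datetime(2021, 1, 8), datetime(2021, 1, 7, 12), False),
]
# (incident_start, incident_resolved)
incidents = [(datetime(2021, 1, 6, 2), datetime(2021, 1, 6, 5))]

days = (deploys[-1][0] - deploys[0][0]).days or 1
deployment_frequency = len(deploys) / days                 # deploys per day
lead_time = sum(((d - c).total_seconds() for d, c, _ in deploys), 0.0) / len(deploys)
change_failure_rate = sum(f for *_, f in deploys) / len(deploys)
time_to_restore = sum(((r - s).total_seconds() for s, r in incidents), 0.0) / len(incidents)

print(deployment_frequency, lead_time / 3600, change_failure_rate, time_to_restore / 3600)
```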

[-]

Ask HN: Resources to encourage teen on becoming computer engineer?

Howdy HN

A teenager I am close with would like to become a computer engineer. What resources, books, podcasts, camps, or experiences do you recommend to support this teen's endeavor?

"Ask HN: Something like Khan Academy but full curriculum for grade schoolers?" [through undergrads] https://news.ycombinator.com/item?id=23794001

"Ask HN: How to introduce someone to programming concepts during 12-hour drive?" https://news.ycombinator.com/item?id=15454071

"Ask HN: Any detailed explanation of computer science" https://news.ycombinator.com/item?id=15270458 : topologically-sorted? Information Theory and Constructor Theory are probably at the top:

> A bottom-up (topologically sorted) computer science curriculum (a depth-first traversal of a Thing graph) ontology would be a great teaching resource.

> One could start with e.g. "Outline of Computer Science", add concept dependency edges, and then topologically (and alphabetically or chronologically) sort.

> https://en.wikipedia.org/wiki/Outline_of_computer_science

> There are many potential starting points and traversals toward specialization for such a curriculum graph of schema:Things/skos:Concepts with URIs.

> How to handle classical computation as a "collapsed" subset of quantum computation? Maybe Constructor Theory?

> https://en.wikipedia.org/wiki/Constructor_theory

https://westurner.github.io/hnlog/ ... Ctrl-F "interview", "curriculum"
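
The topological sort itself is one stdlib call in Python 3.9+; the concept names and dependency edges below are hypothetical:

```python
from graphlib import TopologicalSorter

# Each concept maps to its prerequisites (edges point at dependencies).
prerequisites = {
    "algorithms": {"data structures"},
    "data structures": {"programming basics"},
    "complexity theory": {"algorithms", "discrete math"},
    "discrete math": set(),
    "programming basics": set(),
}
order = list(TopologicalSorter(prerequisites).static_order())
print(order)  # prerequisites always appear before the concepts that need them
```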

[-]

CadQuery: A Python parametric CAD scripting framework based on OCCT

[+]

The jupyter-cadquery extension renders models with three.js via pythreejs in a sidebar with jupyterlab-sidecar: https://github.com/bernhard-42/jupyter-cadquery#b-using-a-do...

https://github.com/bernhard-42/jupyter-cadquery/blob/master/...

[-]

Array Programming with NumPy

Looks like there's a new citation for NumPy in town.

"Citing packages in the SciPy ecosystem" lists the existing citations for SciPy, NumPy, scikits, and other -Py things: https://www.scipy.org/citing.html ( source: https://github.com/scipy/scipy.org/blob/master/www/citing.rs... )

A better way to cite requisite software might involve referencing a https://schema.org/SoftwareApplication record in JSON-LD, RDFa, or Microdata; for example: https://news.ycombinator.com/item?id=24489651

But there's as yet no way to publish JSON-LD, RDFa, or Microdata Linked Data from LaTeX with Computer Modern.

[+]
[+]
[+]
[+]

You can get a free DOI for and archive a tag of a Git repo with FigShare or Zenodo.

If you have repo2docker REES dependency scripts (requirements.txt, environment.yml, postInstall,) in your repo, a BinderHub like https://mybinder.org can build and cache a container image and launch a (free) instance in a k8s cloud.

Journals haven't yet integrated with BinderHub.

Putting the suggested citation and DOI URI/URL in your README and cataloging citations in an e.g. wiki page may increase the crucial frequency of citation.

A Linked Data format for presenting well-formed arguments with #StructuredPremises would help to realize the potential of the web as a graph of resources which may satisfy formal inclusion criteria for #LinkedMetaAnalyses.

[+]

We could reason about sites that index https://schema.org/ScholarlyArticle according to our own and others' observations. Google Scholar, Semantic Scholar, and Meta all index Scholarly Articles: they copy the bibliographic metadata and the abstract for archival and scholarly purposes.

AFAIU, e.g. Zotero and Mendeley do not crawl and index articles or attempt to parse bibliographic citations from the astounding plethora of citation styles [citationstyles, citationstyles_stylerepo] into a citation graph suitable for representative metrics [zenodo_newmetrics].

bitcoin.org/bitcoin.pdf does not have a DOI, does not have an ORCID [orcid], and is not published in any journal but is indexed by e.g. Google Scholar; though there are apparently multiple records referring to a ScholarlyArticle with the same name and author. Something like "Hell's Angels" (1930)? No DOI, no ORCID, no parseable PDF structure: not indexed.

AFAIU, Google Scholar does not yet index ScholarlyArticle (or SoftwareApplication < CreativeWork) bibliographic metadata. GScholar indexes an older set of bibliographic metadata from HTML <meta> tags and also attempts to parse PDFs. [gscholar_inclusion]

Google Scholar is also not (yet?) integrated with Google Dataset Search (which indexes https://schema.org/Dataset metadata).

FigShare DOIs and Zenodo DOIs are DataCite DOIs [figshare_howtocite, zenodo_principles]; which apparently aren't (yet?) all indexed by Google Scholar [rescience_gscholar].

IIUC, all papers uploaded to https://arxiv.org are indexed by Google Scholar. In order for arxiv-vanity.org [arxiv_vanity] to render a mobile-ready, font-resizeable HTML5 version of a paper uploaded to arXiv, the LaTeX source must be uploaded. Arxiv hosts certain categories of ScholarlyArticles.

JOSS (Journal of Open Source Software) has managed to get articles indexed by Google Scholar [rescience_gscholar]. They publish their costs [joss_costs]: $275 Crossref membership, DOIs: $1/paper:

> Assuming a publication rate of 200 papers per year this works out at ~$4.75 per paper

[citationstyles]: https://citationstyles.org

[citationstyles_stylerepo]: https://github.com/citation-style-language/styles

[gscholar_inclusion]: https://scholar.google.com/intl/en/scholar/inclusion.html#in...

[figshare_howtocite]: https://knowledge.figshare.com/articles/item/how-to-share-ci...

[zenodo_principles]: https://about.zenodo.org/principles/

[zenodo_newmetrics]: https://www.frontiersin.org/articles/10.3389/frma.2017.00013...

[rescience_gscholar]: https://github.com/ReScience/ReScience/issues/38

[arxiv_vanity]: https://www.arxiv-vanity.com/

[joss_costs]: https://joss.theoj.org/about#costs

[orcid]: https://en.wikipedia.org/wiki/ORCID

[-]

Do you like the browser bookmark manager?

How do you think it compares to services like webcull.com, raindrop.io, or getpocket.com? Have they advanced the field to the point that it's worth switching?

Things I'd add to browser bookmark managers someday:

- Support for (persisting) bookmarks tags. From the post re: the re-launch of del.icio.us: https://news.ycombinator.com/item?id=23985623

> "Allow reading and writing bookmark tags" https://bugzilla.mozilla.org/show_bug.cgi?id=1225916

> Notes re: how this could be standardized with JSON-LD: https://bugzilla.mozilla.org/show_bug.cgi?id=1225916#c116

> The existing Web Experiment for persisting bookmark tags: https://github.com/azappella/webextension-experiment-tags/bl...

- Standard search features like operators: ((term) AND (term2)) OR term3

- Regex search

- (Chrome) show the createdDate and allow (non-destructive) sort by date

- Native sync API for syncing to zero or more bookmarks / personal data storage providers

- Support for integration with extensions that support actual resource metadata like Zotero

- Linked Data support: extract and store bibliographic metadata like Zotero and OpenLink Structured Data Sniffer

What are the current limitations of the WebExtensions Bookmarks API (now supported by Firefox, Chrome, Edge, and hopefully eventually Safari)?: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/Web...

[-]

NIST Samate – Source Code Security Analyzers

Additional lists of static analysis, dynamic analysis, SAST, DAST, and other source code analysis tools:

OWASP > Source Code Analysis Tools: https://owasp.org/www-community/Source_Code_Analysis_Tools

https://analysis-tools.dev/ (supports upvotes and downvotes)

analysis-tools-dev/static-analysis: https://github.com/analysis-tools-dev/static-analysis

analysis-tools-dev/dynamic-analysis: https://github.com/analysis-tools-dev/dynamic-analysis

devsecops/awesome-devsecops: https://github.com/devsecops/awesome-devsecops , https://github.com/TaptuIT/awesome-devsecops

kai5263499/awesome-container-security: https://github.com/kai5263499/awesome-container-security

https://en.wikipedia.org/wiki/DevOps#DevSecOps,_Shifting_Sec... :

> DevSecOps is an augmentation of DevOps to allow for security practices to be integrated into the DevOps approach. The traditional centralised security team model must adopt a federated model allowing each delivery team the ability to factor in the correct security controls into their DevOps practices.

awesome-safety-critical: https://awesome-safety-critical.readthedocs.io/en/latest/

[-]

A Handwritten Math Parser in 100 lines of Python

[+]
[+]

Reverse Polish notation (RPN) > Converting from infix notation https://en.wikipedia.org/wiki/Reverse_Polish_notation#Conver... > Shunting-yard algorithm https://en.wikipedia.org/wiki/Shunting-yard_algorithm

Infix notation supports parentheses.

Infix notation: 3 + 4 × (2 − 1)

RPN: 3 4 2 1 − × +
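
A minimal shunting-yard sketch (ASCII operators; left-associative binary operators and parentheses only) that reproduces the example:

```python
# Dijkstra's shunting-yard algorithm: infix tokens -> RPN.
PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_rpn(tokens):
    output, ops = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            # Pop operators of greater-or-equal precedence (left-assoc).
            while (ops and ops[-1] != "(" and
                   PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]):
                output.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                output.append(ops.pop())
            ops.pop()  # discard the "("
        else:  # operand
            output.append(tok)
    while ops:
        output.append(ops.pop())
    return output

print(to_rpn("3 + 4 * ( 2 - 1 )".split()))  # ['3', '4', '2', '1', '-', '*', '+']
```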

[-]

PEP – An open source PDF editor for Mac

[+]
[+]
[+]
[+]

> RFC 4122 defines a Uniform Resource Name (URN) namespace for UUIDs. A UUID presented as a URN appears as follows:[1]

> > urn:uuid:123e4567-e89b-12d3-a456-426655440000

https://en.wikipedia.org/wiki/Universally_unique_identifier#...

Version 4 UUIDs have 122 random bits (out of 128 bits total).

In Python:

  >>> import uuid
  >>> _id = uuid.uuid4()
  >>> _id.urn
  'urn:uuid:4c466878-a81b-4f22-a112-c704655fa4ee'
Whether search engines will consider a URL, a URN, or a random string without dashes to be one searchable token matters for extracting relations between resources in a Linked Data hypergraph.

  >>> _id.hex
  '4c466878a81b4f22a112c704655fa4ee'
The relation between a resource and a Thing with a URI/URN/URL can be expressed with https://schema.org/about . In JSON-LD ("JSONLD"):

  {"@context": "https://schema.org",
   "@type": "WebPage",
   "about": {
     "@type": "SoftwareApplication",
     "identifier": "urn:uuid:4c466878-a81b-4f22-a112-c704655fa4ee",
     "url": ["", ""],
     "name": [
       "a schema.org/SoftwareApplication < CreativeWork < Thing",
       {"@value": "a rose by any other name",
        "@language": "en"}]}}
Or with RDFa:

  <body vocab="https://schema.org/" typeof="WebPage">
    <div property="about" typeof="SoftwareApplication">
      <meta property="identifier" content="urn:uuid:4c466878-a81b-4f22-a112-c704655fa4ee"/>
      <link property="url" href=""/>
      <link property="url" href=""/>
      <span property="name">a schema.org/SoftwareApplication < CreativeWork < Thing</span>
      <span property="name" lang="en">a rose by any other name</span>
    </div>
  </body>
Or with Microdata:

  <div itemtype="https://schema.org/WebPage" itemscope>
    <link itemprop="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" 
target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" rel="nofollow noopener" target="_blank">http://www.w3.org/ns/rdfa#usesVocabulary" href="https://schema.org/" />
    <div itemprop="about" itemtype="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" 
rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">https://schema.org/SoftwareApplication" itemscope>
      
      
      <meta itemprop="identifier" content="urn:uuid:4c466878-a81b-4f22-a112-c704655fa4ee" />
      <meta itemprop="name" content="a <a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="<a href="http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow 
noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" 
target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" 
target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">http://schema.org/SoftwareApplication" rel="nofollow noopener" target="_blank">schema.org/SoftwareApplication < CreativeWork < Thing"/>
      <meta itemprop="name" content="a rose by any other name" lang="en"/>
    </div>
  </div>

[-]

The Unix timestamp will begin with 16 this Sunday

It's gonna be so fun. In UTC:

  >>> import datetime
  >>> datetime.datetime.now().timestamp()
  1599923432.252943
  >>> datetime.datetime.utcfromtimestamp(16e8)
  datetime.datetime(2020, 9, 13, 12, 26, 40)

[-]

Redox: Unix-Like Operating System in Rust

[+]

Are there tools to support static analysis and formal methods in Rust yet?

From https://news.ycombinator.com/item?id=21839514 re: awesome-safety-critical https://awesome-safety-critical.readthedocs.io/en/latest/ :

> > Does Rust have a chance in mission-critical software? (currently Ada and proven C niches) https://www.reddit.com/r/rust/comments/5iv5j7/does_rust_have...

FWIU, Sealed Rust is in progress.

And there's also RustPython for the userspace.

[-]

Ask HN: How are online communities established?

HN, Reddit, Stack Overflow, etc. are all established communities with users. How do you start a community when you don't have any users?

[+]

Seconded. "People Powered: How Communities Can Supercharge Your Business, Brand, and Teams" (2019) https://g.co/kgs/CF5TEk

"The Art of Community: Building the New Age of Participation" (2012) https://g.co/kgs/P2V1kn

"Tribes: We need you to lead us" (2011) https://g.co/kgs/T8jaFS

The 1% 'rule' https://en.wikipedia.org/wiki/1%25_rule_(Internet_culture) :

> In Internet culture, the 1% rule is a rule of thumb pertaining to participation in an internet community, stating that only 1% of the users of a website add content, while the other 99% of the participants only lurk. Variants include the 1–9–90 rule (sometimes 90–9–1 principle or the 89:10:1 ratio),[1] which states that in a collaborative website such as a wiki, 90% of the participants of a community only consume content, 9% of the participants change or update content, and 1% of the participants add content.

... Relevant metrics:

- Marginal cost of service https://en.wikipedia.org/wiki/Marginal_cost

- Customer acquisition cost: https://en.wikipedia.org/wiki/Customer_acquisition_cost

- [Quantifiable and non-quantifiable] Customer Lifetime Value: https://en.wikipedia.org/wiki/Customer_lifetime_value

Last words of the almost-clichéd community organizer surrounded by dormant accounts: "Network effects will result in sufficient (grant) funding"

Business model examples that may be useful for building and supporting sustainable communities with clear Missions, Objectives, and Criteria for Success: https://gist.github.com/ndarville/4295324

[-]

Python Documentation Using Sphinx

I usually generate new Python projects with a cookiecutter such as cookiecutter-pypackage. I like that cookiecutter-pypackage includes a Makefile with a `docs` task, so I can call `make docs` to build the Sphinx docs in the docs/ directory, which includes:

- a /docs/readme.rst that includes the /README.rst as the first document in the toctree

- a sensible set of default documents: readme (.. include:: /README.rst), installation, usage, modules (sphinx-autodoc output), contributing, authors, history (.. include:: /HISTORY.rst)

- a sphinx conf.py that sets the docs' version and release attributes from pkgname.__version__, so that the version number only needs to be changed in one place (as long as setup.py or setup.cfg also read the version string from pkgname.__version__)

- a default set of extensions: ['sphinx.ext.autodoc', 'sphinx.ext.viewcode'] that generates API docs and includes '[source]' hyperlinks from the generated API docs to the transcluded syntax-highlighted source code and links back to the API docs from the source code

https://github.com/audreyfeldroy/cookiecutter-pypackage/tree...
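The version single-sourcing described above can be sketched in a minimal conf.py. `mypackage` is a hypothetical project name; it's simulated with a stub module here so the snippet is self-contained:

```python
import importlib
import sys
import types

# Simulate an installed package that exposes __version__ ("mypackage" is
# hypothetical; in a real project you would simply import your own package).
mypackage = types.ModuleType("mypackage")
mypackage.__version__ = "1.2.3"
sys.modules["mypackage"] = mypackage

# docs/conf.py (sketch): read the version string from the package, in one place.
release = importlib.import_module("mypackage").__version__  # full version, e.g. "1.2.3"
version = ".".join(release.split(".")[:2])                  # short X.Y version, e.g. "1.2"

# Default extensions, as described above:
extensions = ["sphinx.ext.autodoc", "sphinx.ext.viewcode"]
```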

There are a few styles of docstrings that Sphinx can parse and include in docs with e.g. sphinx-autodoc:

`:param`, `:type`, `:returns`, `:rtype` docstrings, which OP uses, and from which pycontracts can read runtime parameter and return type contracts: https://andreacensi.github.io/contracts/ (though Python 3 annotations are now the preferred style for compile-time or editing-time typechecks)

Numpydoc docstrings: https://numpydoc.readthedocs.io/en/latest/format.html

Googledoc docstrings: https://sphinxcontrib-napoleon.readthedocs.io/en/latest/
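For comparison, here's the same trivial (hypothetical) function documented in the Sphinx `:param:` style and in the Google style that sphinxcontrib-napoleon parses:

```python
def add_sphinx_style(a, b):
    """Add two numbers (Sphinx/ReST docstring style).

    :param a: first addend
    :type a: int
    :param b: second addend
    :type b: int
    :returns: the sum of a and b
    :rtype: int
    """
    return a + b


def add_google_style(a: int, b: int) -> int:
    """Add two numbers (Google docstring style).

    Args:
        a: first addend
        b: second addend

    Returns:
        The sum of a and b.
    """
    return a + b
```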

You can use Markdown with Sphinx in at least three ways:

- MyST Markdown supports Sphinx and Docutils roles and directives. Jupyter Book builds upon MyST Markdown; with Jupyter Book, you can include Jupyter notebooks (which can include MyST Markdown) in your Sphinx docs. Executable notebooks are a much easier way to include up-to-date code outputs in docs. https://myst-parser.readthedocs.io/en/latest/

- Sphinx (& ReadTheDocs) with recommonmark: https://docs.readthedocs.io/en/stable/intro/getting-started-...

- Nbsphinx predates Jupyter Book and doesn't yet support MyST Markdown, but does support Markdown cells in Jupyter notebooks: it includes a parser for including .ipynb Jupyter notebooks in Sphinx docs, supports raw RST (ReST) cells in notebooks, and has great docs: https://nbsphinx.readthedocs.io/en/latest/

Nbdev is another approach; though it's not Sphinx:

> nbdev is a library that allows you to fully develop a library in Jupyter Notebooks, putting all your code, tests and documentation in one place.

> [...] Add %nbdev_export flags to the cells that define the functions you want to include in your python modules

https://github.com/fastai/nbdev

A few additional sources of docs for Sphinx and ReStructuredText:

Read The Docs docs > Getting Started with Sphinx > External Resources https://docs.readthedocs.io/en/stable/intro/getting-started-...

CPython Devguide > "Documenting Python" https://devguide.python.org/documenting/

"How to write [Linux] kernel documentation" https://www.kernel.org/doc/html/latest/doc-guide/index.html

awesome-sphinxdoc: https://github.com/yoloseem/awesome-sphinxdoc

... "Ask HN: Recommendations for Books on Writing [for engineers]?" https://news.ycombinator.com/item?id=23945580

[-]

Traits of good remote leaders

sfg | 2020-09-10 07:18:54 | 356 | # | ^
[+]

Fortunately the references are free to view.

"Table 4 – Correlation of Development Phases, Coping Stages and Comfort Zone transitions and the Performance Model" in "From Comfort Zone to Performance Management" (White, 2008) correlates the Tuckman group development phases (Forming, Storming, Norming, Performing, Adjourning) with the Carnall coping cycle (Denial, Defense, Discarding, Adaptation, Internalization), Comfort Zone Theory (First Performance Level, Transition Zone, Second Performance Level), and the White-Fairhurst TPR model (Transforming, Performing, Reforming). The ScholarlyArticle also suggests a management style for each stage (Commanding, Cooperative, Motivational, Directive, Collaborative), and suggests that team performance is described by chained power curves of re-progression through these stages.

https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%E2...

IDK what's different about online teams in regards to performance management?

[-]

Show HN: Eiten – open-source tool for portfolio optimization

Is it possible to factor (e.g. GRI) sustainability criteria into the portfolio fitness function? https://news.ycombinator.com/item?id=21922558

My concern is that, like any other portfolio optimization algorithm, blindly optimizing on fundamentals and short-term returns will lead to investing in firms that just dump external costs onto people in the present and future; so screening with sustainability criteria is important to me.
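A minimal sketch of that screening step, with made-up tickers, ESG scores, and volatilities; inverse-volatility weighting stands in here for a real portfolio optimizer, and the threshold is arbitrary:

```python
universe = {
    # ticker: (hypothetical ESG score 0-100, annualized volatility)
    "AAA": (82, 0.20),
    "BBB": (45, 0.15),  # fails the sustainability screen below
    "CCC": (71, 0.30),
}

ESG_MIN = 60  # arbitrary cutoff for illustration

# Screen first, so the optimizer never sees excluded firms.
screened = {t: vol for t, (esg, vol) in universe.items() if esg >= ESG_MIN}

# Inverse-volatility weights over the screened universe only.
inv = {t: 1.0 / vol for t, vol in screened.items()}
total = sum(inv.values())
weights = {t: w / total for t, w in inv.items()}
```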

From https://news.ycombinator.com/item?id=19111911 :

> awesome-quant lists a bunch of other tools for algos and superalgos: https://github.com/wilsonfreitas/awesome-quant

[+]

(Sustainable) Index ETFs in the stocks.txt universe would likely be less sensitive to single performers' effects in unbalanced portfolios.

> pyfolio.tears.create_interesting_times_tear_sheet measures algorithmic trading algorithm performance during "stress events" https://github.com/quantopian/pyfolio/blob/03568e0f328783a6a...


Ask HN: Any well funded tech companies tackling big, meaningful problems?

Are there any well funded tech startups / companies tackling major societal problems? Any of these fair game: https://en.wikipedia.org/wiki/List_of_global_issues

----

I don't see or hear of any and want to know if this is just my bias or if there really is a shortage of resources in tech being allocated to solving the world's most important problems. I'm sure I'm not the only engineer that's looking out for companies like this.

Ran into this previous Ask HN (https://news.ycombinator.com/item?id=24168902) that asked a similar question. However, here I wanna focus on the better funded efforts (not side projects, philanthropy etc).

One example I've heard so far is Tesla. Any others?

You can make an impact on important local and global problems by investing your time, career, and savings, and by listing and comparing solutions.

As a labor market participant, you can choose to work for places that have an organizational mission that strategically aligns with local, domestic, and international objectives.

https://en.wikipedia.org/wiki/Strategic_alignment ... "Schema.org: Mission, Project, Goal, Objective, Task" https://news.ycombinator.com/item?id=12525141

As an investor, you can choose to invest in organizations that are making the sort of impact you're looking for: you can impact invest.

https://en.wikipedia.org/wiki/Impact_investing

You mentioned "List of global issues"; which didn't yet have a link to the UN Sustainable Development Goals (the #GlobalGoals). I just added this to the linked article:

> As part of the 2030 Agenda for Sustainable Development, the UN Millennium Development Goals (2000-2015) were superseded by the UN Sustainable Development Goals (2016-2030), which are also known as The Global Goals. There are associated Targets and Indicators for each Global Goal.

There are 17 Global Goals.

Sustainability reporting standards can align with the Sustainable Development Goals. For example, the GRI standards are now aligned with the UN Sustainable Development Goals.

https://en.wikipedia.org/wiki/Sustainable_Development_Goals

Investors, fund managers, and potential employees can identify companies which are making an impact by reviewing corporate sustainability and ESG reports.

From https://www.undp.org/content/undp/en/home/sustainable-develo... :

> SDG Target 12.6: "Encourage companies, especially large and transnational companies, to adopt sustainable practices and to integrate sustainability information into their reporting cycle"

From https://news.ycombinator.com/item?id=21302926 :

> > What are some of the corporate sustainability reporting standards?

> > From https://en.wikipedia.org/wiki/Sustainability_reporting#Initi... :

> >> Organizations can improve their sustainability performance by measuring (EthicalQuote (CEQ)), monitoring and reporting on it, helping them have a positive impact on society, the economy, and a sustainable future. The key drivers for the quality of sustainability reports are the guidelines of the Global Reporting Initiative (GRI),[3] (ACCA) award schemes or rankings. The GRI Sustainability Reporting Guidelines enable all organizations worldwide to assess their sustainability performance and disclose the results in a similar way to financial reporting.[4] The largest database of corporate sustainability reports can be found on the website of the United Nations Global Compact initiative.

> >The GRI (Global Reporting Initiative) Standards are now aligned with the UN Sustainable Development Goals (#GlobalGoals). https://en.wikipedia.org/wiki/Global_Reporting_Initiative

> >> In 2017, 63 percent of the largest 100 companies (N100), and 75 percent of the Global Fortune 250 (G250) reported applying the GRI reporting framework.[3]

What are some good ways to search for companies who (1) do sustainability reports, (2) engage in strategic alignment in corporate planning sessions, (3) make sustainability a front-and-center issue in their company's internal and external communications?

What are some examples of companies that focus on sustainability and/or have developed a nonprofit organization for philanthropic missions, which is sometimes best accounted for as a distinct organization or business unit (one that can accept donations and issue receipts as a non-profit)?

How can an employee drive change in a small or a large company? Identify opportunities to deliver value and goodwill. Read through the Global Goals, Targets, and Indicators; and get into the habit of writing down problems and solutions.

3 pillars of [Corporate] Sustainability: (Environment (Society (Economy))). https://en.wikipedia.org/wiki/Sustainability#Three_dimension...

"Launch HN: Charityvest (YC S20) – Employee charitable funds and gift matching" https://news.ycombinator.com/item?id=23907902 :

> We created a modern, simple, and affordable way for companies to include charitable giving in their suite of employee benefits.

> We give employees their own tax-deductible charitable giving fund, like an “HSA for Charity.” They can make contributions into their fund and, from their fund, support any of the 1.4M charities in the US, all on one tax receipt.

> Using the funds, we enable companies to operate gift matching programs that run on autopilot. Each donation to a charity from an employee is matched automatically by the company in our system.

> A company can set up a matching gift program and launch giving funds to employees in about 10 minutes of work.

"Salesforce Sustainability Cloud Becomes Generally Available" https://news.ycombinator.com/item?id=22068522 :

> Are there similar services for Sustainability Reporting and accountability?


Column Names as Contracts


In terms of database normalization, delimiting multiple fields within a column name violates the "atomic columns" requirement of the first through sixth normal forms (1NF-6NF).

https://en.wikipedia.org/wiki/Database_normalization

Are there standards for storing columnar metadata (that is, metadata about the columns; or column-level metadata)?

In terms of columns, SQL has (implicit ordinal, name, type) and then primary key, index, and [foreign key] constraints.

RDFS (RDF Schema) is an open W3C linked data standard. An rdf:Property may have a rdfs:domain and a rdfs:range; where the possible datatypes are listed as instances of rdfs:range. Primitive datatypes are often drawn from XSD (XML Schema Definition), or https://schema.org/ . An rdfs:Class instance may be within the rdfs:domain and/or the rdfs:range of an rdf:Property.
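A small illustrative RDFS fragment in Turtle (the `ex:` names are hypothetical):

```turtle
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://example.org/> .

# ex:temperature applies to ex:Observation instances and takes decimal values.
ex:temperature a rdf:Property ;
    rdfs:domain ex:Observation ;
    rdfs:range  xsd:decimal .
```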

RDFS is generally not sufficient for data validation; there are a number of standards which build upon RDFS: W3C SHACL (Shapes Constraint Language) and W3C CSVW (CSV on the Web).

There is some existing work on merging JSON Schema and SHACL.

CSVW builds upon the W3C "Model for Tabular Data and Metadata on the Web", which supports arbitrary "annotations" on columns. CSVW can be represented in any RDF serialization: Turtle/TriG/N3, RDF/XML, JSON-LD.

https://www.w3.org/TR/tabular-data-primer/

https://www.w3.org/TR/tabular-data-model/ :

> an annotated tabular data model: a model for tables that are annotated with metadata. Annotations provide information about the cells, rows, columns, tables, and groups of tables […]

...

From https://twitter.com/westurner/status/901992073846456321 :

> "7 metadata header rows (column label, property URI path, DataType, unit, accuracy, precision, significant figures)" https://wrdrd.github.io/docs/consulting/linkedreproducibilit...

...

From https://twitter.com/westurner/status/1295774405923147778 :

> Relevant: https://discuss.ossdata.org/ topics: "Linked Data formats, tools, challenges, opportunities; CSVW, https://schema.org/Dataset , https://schema.org/ScholarlyArticle " https://discuss.ossdata.org/t/linked-data-formats-tools-chal...

> "A dataframe protocol for the PyData ecosystem" https://discuss.ossdata.org/t/a-dataframe-protocol-for-the-p...

> A .meta protocol should implement the W3C Tabular Data Model: [...]

...

The various methods of doing CSV2RDF and R2RML (SQL / RDB to RDF Mapping) each have a way to specify additional metadata annotations. None of them stuffs data into a column name (which I'm also guilty of doing with e.g. "columnspecs" in a small line-parsing utility called pyline that can cast columns to Python types and output JSON lines).

...

Even JSON5 is insufficient when it comes to representing e.g. complex fractions: there must be a tbox (schema) in order to read the data out of the abox (assertions; e.g. JSON). JSON-LD is sufficient for representation; and there are also specs like RDFS, SHACL, and CSVW.

Abox: https://en.wikipedia.org/wiki/Abox


Graph Representations for Higher-Order Logic and Theorem Proving (2019)

ONNX (and maybe RIF) are worth mentioning.

ONNX: https://onnx.ai/ :

> ONNX is an open format built to represent machine learning models. ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format to enable AI developers to use models with a variety of frameworks, tools, runtimes, and compilers

RIF (~FOL): https://en.wikipedia.org/wiki/Rule_Interchange_Format

Datalog (not Turing-complete): https://en.wikipedia.org/wiki/Datalog

HOList Benchmark: https://sites.google.com/view/holist/home

"HOList: An Environment for Machine Learning of Higher-Order Theorem Proving" (2019) https://arxiv.org/abs/1904.03241

> Abstract: We present an environment, benchmark, and deep learning driven automated theorem prover for higher-order logic. Higher-order interactive theorem provers enable the formalization of arbitrary mathematical theories and thereby present an interesting, open-ended challenge for deep learning. We provide an open-source framework based on the HOL Light theorem prover that can be used as a reinforcement learning environment. HOL Light comes with a broad coverage of basic mathematical theorems on calculus and the formal proof of the Kepler conjecture, from which we derive a challenging benchmark for automated reasoning. We also present a deep reinforcement learning driven automated theorem prover, DeepHOL, with strong initial results on this benchmark.


How do transformers handle truth tables, logical connectives, propositional logic / rules of inference, and first-order logic?

Truth table: https://en.wikipedia.org/wiki/Truth_table

Logical connective: https://en.wikipedia.org/wiki/Logical_connective

Propositional logic: https://en.wikipedia.org/wiki/Propositional_calculus

Rules of inference: https://en.wikipedia.org/wiki/Rule_of_inference

DL: Description logic: https://en.wikipedia.org/wiki/Description_logic (... The OWL 2 profiles (EL, QL, RL; DL, Full) have established decidability and complexity: https://www.w3.org/TR/owl2-profiles/ )

FOL: First-order logic: https://en.wikipedia.org/wiki/First-order_logic

HOL: Higher-order logic: https://en.wikipedia.org/wiki/Higher-order_logic

In terms of regurgitating without critical reasoning?

Critical reasoning: https://en.wikipedia.org/wiki/Critical_thinking


Show HN: Linux sysadmin course, eight years on

Almost eight years ago I launched an online “Linux sysadmin course for newbies” here at HN.

It was a side-project that went well, but never generated enough money to allow me to fully commit to leaving the Day Job. After surviving the Big C and getting made redundant, I thought I might improve and relaunch it commercially – but my doctors are a pessimistic bunch, so it looked like I didn’t have the time.

Instead, I rejigged/relaunched it via a Reddit forum this February as free and open - and have now gathered a team of helpers to ensure that it keeps going each month even after I can’t be involved any longer.

It’s a month-long course which restarts each month, so “Day 1” of September is this coming Monday.

It would be great if you could pass the word on to anyone you know who may be the target market of those who: “...aspire to get Linux-related jobs in industry - junior Linux sysadmin, devops-related work and similar”.

[0] http://www.linuxupskillchallenge.org/

[1] https://www.reddit.com/r/linuxupskillchallenge/

[2] http://snori74.blogspot.com/2020/04/health-status.html

There are a number of resources that may be useful for your curriculum for this project listed in "Is there a program like codeacademy but for learning sysadmin?" https://news.ycombinator.com/item?id=19469266 :

> [ http://www.opsschool.org/ , https://github.com/kahun/awesome-sysadmin/blob/master/README... , https://github.com/stack72/ops-books , https://landing.google.com/sre/books/ , https://response.pagerduty.com/ (Incident Response training)]

To that I'd add that K3D (based on K3S, which is now a CNCF project) runs Kubernetes (k8s) in Docker containers. https://github.com/rancher/k3d

For zero-downtime (HA: High availability) deployments, "Zero-Downtime Deployments To a Docker Swarm Cluster" describes Rolling Updates and Blue-Green Deployments; with illustrations: https://github.com/vfarcic/vfarcic.github.io/blob/master/doc...

For git-push style deployment with more of a least privileges approach (which also has more moving parts) you could take a look at: https://github.com/dokku/dokku-scheduler-kubernetes#function...

And also reference ansible molecule and testinfra for writing sysadmin tests and the molecule vagrant driver for testing docker configurations. https://www.jeffgeerling.com/blog/2018/testing-your-ansible-...

https://molecule.readthedocs.io/en/latest/

https://testinfra.readthedocs.io/en/latest/ :

> With Testinfra you can write unit tests in Python to test actual state of your servers configured by management tools like Salt, Ansible, Puppet, Chef and so on.

> Testinfra aims to be a Serverspec equivalent in python and is written as a plugin to the powerful Pytest test engine.

I wasn't able to find a syllabus or a list of all of the daily posts. Are you focusing on DevOps and/or DevSecOps skills?

EDIT: The lessons are Markdown files in a Git repo: https://github.com/snori74/linuxupskillchallenge

Links to each lesson, the title and/or subjects of the lesson, and the associated reddit posts might be useful in a Table of Contents in the README.md.


Maybe most useful as resources for further study.

Looks like Day 20 covers shell scripting. A few things worth mentioning:

You can write tests for shell scripts and write TAP (Test Anything Protocol) -formatted output: https://testanything.org/producers.html#shell
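As a sketch, a minimal TAP producer in plain shell (the test names are illustrative):

```shell
#!/bin/sh
# TAP output: a plan line ("1..N"), then one "ok"/"not ok" line per test.
printf '1..2\n'

if [ -d /tmp ]; then
  printf 'ok 1 - /tmp exists\n'
else
  printf 'not ok 1 - /tmp exists\n'
fi

if [ "$(echo hello | tr a-z A-Z)" = "HELLO" ]; then
  printf 'ok 2 - tr uppercases\n'
else
  printf 'not ok 2 - tr uppercases\n'
fi
```

TAP consumers such as `prove` can then aggregate this output across many test scripts.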

Quoting in shell scripts is something to be really careful about:

> This and this do different things:

  # prints a newline
  echo $(echo "-e a\nb")

  # prints "-e a\nb"
  echo "$(echo "-e a\nb")"
Shellcheck can identify some of those types of (security) bugs/errors/vulns in shell scripts: https://www.shellcheck.net/

LearnXinYminutes has a good bash reference: https://learnxinyminutes.com/docs/bash/

And an okay Ansible reference, which (like Ops School) we should contribute to: https://learnxinyminutes.com/docs/ansible/

Why do so many pros avoid maintaining shell scripts, instead writing one-off commands that they'll never remember to run again later?

...

It may be helpful to format these as Jupyter notebooks with input and output cells.

- Ctrl-Shift-Minus splits a cell at the cursor

- M converts a cell to Markdown; Y converts it back to code

If you don't want to prefix every code cell line with a '!' so that the ipykernel Jupyter python kernel (the default kernel) executes the line with $SHELL, you can instead install and select bash_kernel; though users attempting to run the notebooks interactively would then need to also have bash_kernel installed: https://github.com/takluyver/bash_kernel

You can save a notebook .ipynb to any of a number of Markdown and non-Markdown formats https://jupytext.readthedocs.io/en/latest/formats.html#markd... ; unfortunately jupytext only auto-saves to md without output cell content for now: https://github.com/mwouts/jupytext/issues/220

You can make reveal.js slides (that do include outputs) from a notebook: https://gist.github.com/mwouts/04a6dfa571bda5cc59fa1429d1309...

With nbconvert, you can manually save an .ipynb Jupyter notebook as Markdown which includes the cell outputs w/ File > "Download as / Export Notebook as" > "Export notebook to Markdown" or with the CLI: https://nbconvert.readthedocs.io/en/latest/usage.html#conver...

    jupyter nbconvert --to markdown notebook.ipynb
    jupyter nbconvert --help
With Jupyter Book, you can build an [interactive] book as HTML and/or PDF from multiple Jupyter notebooks as e.g. Markdown documents https://jupyterbook.org/intro.html :

    jupyter-book build mybook/
...

From https://westurner.github.io/tools/#bash :

    type bash
    bash --help
    help help
    help type
    apropos bash
    info bash
    man bash
    
    man man
    info info
From https://news.ycombinator.com/item?id=22980353 ; this is how dotfiles work:

    info bash -n "Bash Startup Files"
  
> https://www.gnu.org/software/bash/manual/html_node/Bash-Star...

...

Re: dotfiles, losing commands that should've been logged to HISTFILE when running multiple bash sessions and why I wrote usrlog.sh: https://westurner.github.io/hnlog/#comment-20671184 (Ctrl-F for: "dotfiles", "usrlog.sh", "inputrc")
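For reference, a hedged sketch of the `~/.bashrc` history settings involved (values are illustrative; `histappend` plus `history -a` reduces, but doesn't eliminate, history lost between concurrent bash sessions):

```shell
# Append to $HISTFILE on shell exit instead of overwriting it:
shopt -s histappend
# Keep more history than the defaults:
export HISTSIZE=100000
export HISTFILESIZE=200000
# Flush each command to $HISTFILE as soon as it is entered:
export PROMPT_COMMAND='history -a'
```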

https://dotfiles.github.io/

https://github.com/webpro/awesome-dotfiles

...

awesome-sysadmin > resources: https://github.com/kahun/awesome-sysadmin#resources


Software supply chain security

Estimates of prevalence do assume detection. How would we detect that a dependency that was installed a few deployments and reboots ago was compromised?

How does the classic infosec triad (Confidentiality, Integrity, Availability) apply to software supply chain security?

Confidentiality: Presumably we're talking about open source projects; which aren't confidential. Projects may request responsible disclosure in an e.g. security.txt; and vuln reports may be confidential for at least a little while.

Integrity: Secure transport protocols, checksums, and cryptographic code signing are ways to mitigate data integrity risks. GitHub supports SSH, 2FA, and GPG keys. Can all keys in the package signature keyring be used to sign any package? Can we verify a public key over a different channel? When we specify exact versions of software dependencies, can we also record package hashes which the package installer(s) will verify?
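On the checksum point, a minimal publisher/consumer sketch with `sha256sum` (file names are illustrative; a real artifact would be a release tarball):

```shell
# Publisher: compute a checksum and publish it alongside the artifact.
echo "pretend release contents" > artifact.tar.gz
sha256sum artifact.tar.gz > SHA256SUMS

# Consumer: verify the downloaded artifact; exits non-zero on any mismatch.
sha256sum -c SHA256SUMS
```

Signing the SHA256SUMS file (e.g. with GPG) additionally binds the checksums to a publisher key, which is where verifying the public key over a different channel matters.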

Availability: What are the internal and external data, network, and service dependencies for the development and deployment DevSecOps workflows? Can we deploy from local package mirrors? Who is responsible for securing and updating local package mirrors? Are these service dependencies all HA? Does everything in this system also depend upon the load balancer? Does our container registry support e.g. Docker Notary (TUF)? How should we mirror TUF package repos?

See also: "Guidance for [[transparent] proxy cache] partial mirrors?" https://github.com/theupdateframework/specification/issues/1...


Mind Emulation Foundation

gk1 | 2020-09-01 13:53:23 | 93 | # | ^

Was just talking about quantum cognition and memristors (in the context of GIT) a few days ago: https://news.ycombinator.com/item?id=24317768

Quantum cognition: https://en.wikipedia.org/wiki/Quantum_cognition

Memristor: https://en.wikipedia.org/wiki/Memristor

It may yet be possible to sufficiently functionally emulate the mind with (orders of magnitude more) transistors. Though, is it necessary to emulate e.g. autonomic functions? Do we consider the immune system to be part of the mind (and gut)?

Perhaps there's something like an amplituhedron - or some happenstance correspondence - that will enable more efficient simulation of quantum systems on classical silicon, pending orders-of-magnitude improvements in coherence and error rates in whichever computation medium.

For abstract formalisms (which do incorporate transistors as a computation medium sufficient for certain tasks), is there a more comprehensive set than Constructor Theory?

Constructor theory: https://en.wikipedia.org/wiki/Constructor_theory

Amplituhedron: https://en.wikipedia.org/wiki/Amplituhedron

What is the universe using our brains to compute? Is abstract reasoning even necessary for this job?

Something worth emulating: Critical reasoning. https://en.wikipedia.org/wiki/Critical_reasoning


How close are computers to automating mathematical reasoning?


Or is automated proof search impossible for humans as well?

Arguably, humans require more energy per operation. So, presumably such an argument hinges upon what types of operations are performed in conducting automated proof search?


The task (in terms of constructor theory) is: Find the functions that sufficiently approximate the observations and record their reproducible derivations.

Either the (unreferenced) study was actually arguing that "automated proof search" can't be done at all, or that human neural computation is categorically non-algorithmic.

Grid search of all combinations of bits that correspond to [symbolic] classical or quantum models.

Or better: evolutionary algorithms and/or neural nets.


That human cognition is quantum in nature - that e.g. entanglement is necessary - may be unfalsifiable.

Neuromorphic engineering has expanded since the 1980s. https://en.wikipedia.org/wiki/Neuromorphic_engineering

Quantum computing is the best known method for simulating chemical reactions and thereby possibly also neurochemical reactions. But is quantum computing necessary to functionally emulate human cognition?

It may be that a different computation medium can accomplish the same tasks without emulating all of the complexity of the brain.

If the brain is only classical and some people are using their brains to perform quantum computations, there may be something there.

Quantum cognition: https://en.wikipedia.org/wiki/Quantum_cognition

Quantum memristors are still elusive.

From "Quantum Memristors in Frequency-Entangled Optical Fields" (2020) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7079656/ :

> Apart from the advantages of using these devices for computation [12] (such as energy efficiency [13], compared to transistor-based computers), memristors can be also used in machine learning schemes [14,15]. The relevance of the memristor lies in its ubiquitous presence in models which describe natural processes, especially those involving biological systems. For example, memristors inherently describe voltage-dependent ion-channel conductances in the axon membrane in neurons, present in the Hodgkin–Huxley model [16,17].

> Due to the inherent linearity of quantum mechanics, it is not straightforward to describe a dissipative non-linear memory element, such as the memristor, in the quantum realm, since nonlinearities usually lead to the violation of fundamental quantum principles, such as no-cloning theorem. Nonetheless, the challenge was already constructively addressed in Ref. [18]. This consists of a harmonic oscillator coupled to a dissipative environment, where the coupling is changed based on the results of a weak measurement scheme with classical feedback. As a result of the development of quantum platforms in recent years, and their improvement in controllability and scalability, different constructions of a quantum memristor in such platforms have been presented. There is a proposal for implementing it in superconducting circuits [7], exploiting memory effects that naturally arise in Josephson junctions. The second proposal is based on integrated photonics [19]: a Mach–Zehnder interferometer can behave as a beam splitter with a tunable reflectivity by introducing a phase in one of the beams, which can be manipulated to study the system as a quantum memristor subject to different quantum state inputs.

Quantum harmonic oscillators have also found application in modeling financial markets. Quantum harmonic oscillator: https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator

New framework for natural capital approach to transform policy decisions

Natural capital: https://en.wikipedia.org/wiki/Natural_capital

> Natural capital is the world's stock of natural resources, which includes geology, soils, air, water and all living organisms. Some natural capital assets provide people with free goods and services, often called ecosystem services. Two of these (clean water and fertile soil) underpin our economy and society, and thus make human life possible.

Natural capital accounting: https://en.wikipedia.org/wiki/Natural_capital_accounting

> Natural capital accounting is the process of calculating the total stocks and flows of natural resources and services in a given ecosystem or region.[1] Accounting for such goods may occur in physical or monetary terms. This process can subsequently inform government, corporate and consumer decision making as each relates to the use or consumption of natural resources and land, and sustainable behaviour.

Opportunity cost: https://en.wikipedia.org/wiki/Opportunity_cost

> When an option is chosen from alternatives, the opportunity cost is the "cost" incurred by not enjoying the benefit associated with the best alternative choice.[1] The New Oxford American Dictionary defines it as "the loss of potential gain from other alternatives when one alternative is chosen."[2] In simple terms, opportunity cost is the benefit not received as a result of not selecting the next best option. Opportunity cost is a key concept in economics, and has been described as expressing "the basic relationship between scarcity and choice". [3] The notion of opportunity cost plays a crucial part in attempts to ensure that scarce resources are used efficiently.[4] Opportunity costs are not restricted to monetary or financial costs: the real cost of output forgone, lost time, pleasure or any other benefit that provides utility should also be considered an opportunity cost. The opportunity cost of a product or service is the revenue that could be earned by its alternative use.

How do we value essential dependencies in terms of future opportunity costs?

In terms of just mental health?

"National parks a boost to mental health worth trillions: study" https://phys.org/news/2019-11-national-boost-mental-health-w...

> Visits to national parks around the world may result in improved mental health valued at about $US6 trillion (5.4 trillion euros), according to a team of ecologists, psychologists and economists

> Professor Bateman's decision-making framework focuses on the links between the environment and economy and has three components: efficiency, assessing which option generates the greatest benefit; sustainability, the effects of each option on natural capital stocks; and equity, regarding who receives the benefits of a decision and when.

Ian J. Bateman et al. "The natural capital framework for sustainably efficient and equitable decision making", Nature Sustainability (2020). DOI: 10.1038/s41893-020-0552-3 https://www.nature.com/articles/s41893-020-0552-3


Challenge to scientists: does your ten-year-old code still run?

"Ten Simple Rules for Reproducible Computational Research" http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fj... :

> Rule 1: For Every Result, Keep Track of How It Was Produced

> Rule 2: Avoid Manual Data Manipulation Steps

> Rule 3: Archive the Exact Versions of All External Programs Used

> Rule 4: Version Control All Custom Scripts

> Rule 5: Record All Intermediate Results, When Possible in Standardized Formats

> Rule 6: For Analyses That Include Randomness, Note Underlying Random Seeds

> Rule 7: Always Store Raw Data behind Plots

> Rule 8: Generate Hierarchical Analysis Output, Allowing Layers of Increasing Detail to Be Inspected

> Rule 9: Connect Textual Statements to Underlying Results

> Rule 10: Provide Public Access to Scripts, Runs, and Results

... You can archive a tag of a Git repo and get a free DOI for it with FigShare or Zenodo.

... re: [Conda and] Docker container images https://news.ycombinator.com/item?id=24226604 :

> - repo2docker (and thus BinderHub) can build an up-to-date container from requirements.txt, environment.yml, install.R, postBuild and any of the other dependency specification formats supported by REES: Reproducible Execution Environment Standard; which may be helpful as Docker Hub images will soon be deleted if they're not retrieved at least once every 6 months (possibly with a GitHub Actions cron task)

BinderHub builds a container with the specified versions of software and installs a current version of Jupyter Notebook with repo2docker, and then launches an instance of that container in a cloud.

“Ten Simple Rules for Creating a Good Data Management Plan” http://journals.plos.org/ploscompbiol/article?id=10.1371/jou... :

> Rule 6: Present a Sound Data Storage and Preservation Strategy

> Rule 8: Describe How the Data Will Be Disseminated

... DVC: https://github.com/iterative/dvc

> Data Version Control or DVC is an open-source tool for data science and machine learning projects. Key features:

> - Simple command line Git-like experience. Does not require installing and maintaining any databases. Does not depend on any proprietary online services. Management and versioning of datasets and machine learning models. Data is saved in S3, Google cloud, Azure, Alibaba cloud, SSH server, HDFS, or even local HDD RAID.

> - Makes projects reproducible and shareable; helping to answer questions about how a model was built.

There are a number of great solutions for storing and sharing datasets.

... "#LinkedReproducibility"


The likelihood of there being a [security] bug discovered in a given software project over any significant period of time is near 100%.

It's definitely a good idea to archive source and binaries and later confirm that the output hasn't changed with and without upgrading the kernel, build userspace, execution userspace, and the PUT/SUT (Package/Software Under Test).

- Specify which versions of which constituent software libraries are utilized. (And hope that a package repository continues to serve those versions of those packages indefinitely). Examples: Software dependency specification formats like requirements.txt, environment.yml, install.R

- Mirror and archive all dependencies and sign the collection. Examples: {z3c.pypimirror, eggbasket, bandersnatch, devpi as a transparent proxy cache}, apt-cacher-ng, pulp, squid as a transparent proxy cache

- Produce a signed archive which includes all requisite software. (And host that download on a server such that data integrity can be verified with cryptographic checksums and/or signatures.) Examples: Docker image, statically-linked binaries, GPG-signed tarball of a virtualenv (which can be made into a proper package with e.g. fpm), ZIP + GPG signature of a directory which includes all dependencies

- Archive (1) the data, (2) the source code of all libraries, and (3) the compiled binary packages, and (4) the compiler and build userspace, and (5) the execution userspace, and (6) the kernel. Examples: Docker can solve for 1-5, but not 6. A VM (virtual machine) can solve for 1-5. OVF (Open Virtualization Format) is an open spec for virtual machine images, which can be built with a tool like Vagrant or Packer (optionally in conjunction with a configuration management tool like Puppet, Salt, Ansible).

When the application requires (7) a multi-node distributed system configuration, something like docker-compose/vagrant/terraform and/or a configuration management tool are pretty much necessary to ensure that it will be possible to reproducibly confirm the experiment output at a different point in spacetime.
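The "signed archive" option above might be sketched as follows (names are illustrative; the GPG steps are shown as comments since they require a signing key):

```shell
# Bundle everything needed to re-run the analysis into one archive:
mkdir -p project && echo "analysis inputs" > project/data.csv
tar czf project-archive.tar.gz project/
sha256sum project-archive.tar.gz > project-archive.tar.gz.sha256

# Binding the archive to an identity requires a GPG signing key:
#   gpg --armor --detach-sign project-archive.tar.gz
# Consumers then verify with:
#   gpg --verify project-archive.tar.gz.asc project-archive.tar.gz
```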

[-]

A deep dive into the official Docker image for Python

[+]

> Why Tini?

> Using Tini has several benefits:

> - It protects you from software that accidentally creates zombie processes, which can (over time!) starve your entire system for PIDs (and make it unusable).

> - It ensures that the default signal handlers work for the software you run in your Docker image. For example, with Tini, SIGTERM properly terminates your process even if you didn't explicitly install a signal handler for it.

> - It does so completely transparently! Docker images that work without Tini will work with Tini without any changes.

[...]

> NOTE: If you are using Docker 1.13 or greater, Tini is included in Docker itself. This includes all versions of Docker CE. To enable Tini, just pass the `--init` flag to docker run.

https://github.com/krallin/tini#why-tini

[+]

There are Alpine [1] and Debian [2] miniconda images (within which you can `conda install python==3.8` and 2.7 and 3.4 in different conda envs)

[1] https://github.com/ContinuumIO/docker-images/blob/master/min...

[2] https://github.com/ContinuumIO/docker-images/blob/master/min...

If you build manylinux wheels with auditwheel [3], they should install without needing compilation on {CentOS, Debian, Ubuntu, and Alpine}; though standard Alpine images have musl instead of glibc by default, this [4] may work:

  echo "manylinux1_compatible = True" > $PYTHON_PATH/_manylinux.py

[3] https://github.com/pypa/auditwheel

[4] https://github.com/docker-library/docs/issues/904#issuecomme...
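
What the echo'd override in [4] exploits: per PEP 513, pip tries to import a module named `_manylinux` from the interpreter's path and, if present, trusts its `manylinux1_compatible` flag instead of probing the glibc version. A sketch of that check, simulating the override in-process rather than writing the file:

```python
import sys
import types

def manylinux1_compatible():
    """Approximation of pip's PEP 513 check for manylinux1 support."""
    try:
        import _manylinux
        return bool(_manylinux.manylinux1_compatible)
    except (ImportError, AttributeError):
        return None  # fall back to detecting the glibc version

# Simulate `echo "manylinux1_compatible = True" > $PYTHON_PATH/_manylinux.py`:
sys.modules["_manylinux"] = types.SimpleNamespace(manylinux1_compatible=True)
```

On a musl system this asserts compatibility that pip cannot verify, so it only works if the wheel's actual shared-library dependencies happen to resolve.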

The miniforge docker images aren't yet [5][6] multi-arch, which means it's not as easy to take advantage of all of the ARM64 / aarch64 packages that conda-forge builds now.

[5] https://github.com/conda-forge/docker-images/issues/102#issu...

[6] https://github.com/conda-forge/miniforge/issues/20

There are i686 and x86-64 docker containers for building manylinux wheels that work with many distros: https://github.com/pypa/manylinux/tree/master/docker

A multi-stage Dockerfile build can produce a wheel in the first stage and install that wheel (with `COPY --from=0`) in a later stage, leaving build dependencies out of the production environment for security and performance: https://docs.docker.com/develop/develop-images/multistage-bu...
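
A minimal sketch of such a multi-stage Dockerfile (image tags, paths, and the package being built are illustrative):

```dockerfile
# Stage 0: build the wheel where compilers and headers are available
FROM python:3.8 AS build
COPY . /src
RUN pip wheel --no-deps --wheel-dir /wheels /src

# Stage 1: runtime image; copy only the built wheel, not the toolchain
FROM python:3.8-slim
COPY --from=0 /wheels /wheels
RUN pip install /wheels/*.whl
```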

[+]

Use cases for conda or conda+pip:

- Already-compiled packages (where there may not be binary wheels) instead of requiring installation and subsequent removal of e.g. build-essential for every install

- Support for R, Julia, NodeJS, Qt, ROS, CUDA, MKL, etc.

- Here's what the Kaggle docker-python Dockerfile installs with conda and with pip: https://github.com/Kaggle/docker-python/blob/master/Dockerfi...

- Build matrix in one container with conda envs

Disadvantages of the official python images as compared with conda+pip:

- Necessary to (re)install build dependencies and a compiler for every build (if there's not a bdist or a wheel for the given architecture) and then uninstall all unnecessary transitive dependencies. This is where a [multi-stage] build of a manylinux wheel may be the best approach.

- No LSM (AppArmor, SELinux, etc.) confinement for one or more processes in the container (which may have read access to /etc or environment variables and/or --privileged)

- Necessary to build basically everything on non x86[-64] architectures for every container build

Disadvantages of conda / conda+pip:

- Different package repo infrastructure to mirror

- Users complaining that they don't need conda who then proceed to re-download and re-build wheels locally multiple times a day

Additional attributes for comparison:

- The new pip solver (which is slower than the traditional iterative non-solver), conda, and mamba

- repo2docker (and thus BinderHub) can build an up-to-date container from requirements.txt, environment.yml, install.R, postBuild, and any of the other dependency specification formats supported by REES (the Reproducible Execution Environment Specification). This may be helpful because Docker Hub images will soon be deleted if they're not retrieved at least once every 6 months (a GitHub Actions cron task could keep them pulled).

[+]

Here's the meta.yml for the conda-forge/python-feedstock: https://github.com/conda-forge/python-feedstock/blob/master/...

It includes patches just like distro packages often do.

[-]

The Consortium for Python Data API Standards

[+]

No, it's easy for library maintainers to offer a compat API in addition to however else they feel they need to differentiate and optimize the interfaces for array operations. People can contribute such APIs directly to libraries once instead of creating many conditionals in every library-utilizing project or requiring yet another dependency on an adapter / facade package that's not kept in sync with the libraries it abstracts.

If a library chooses to implement a spec compatibility API, they do that once (optimally, as compared with somebody's hackish adapter facade which has very little comprehension of each library's internals) and everyone else's code doesn't need to have conditionals.

Each of L libraries implements the compat API once: O(L)

Each of U library utilizers implements conditionals at each of an average of N call sites: O(U × N)

Each of U library utilizers uses the common-denominator compat API: O(U)

L < U < (L + U) < (U × N)
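
Plugging in illustrative numbers (mine, not from the comment) makes the asymmetry concrete:

```python
# Illustrative sizes: a handful of array libraries, many downstream projects,
# many array call sites per project.
L = 10      # libraries implementing the compat API once each
U = 1000    # projects that consume array libraries
N = 50      # average array call sites per consuming project

cost_library_side = L        # each library implements the spec once
cost_conditionals = U * N    # every call site branches per backend
cost_common_api = U          # each project targets the spec once

assert cost_library_side < cost_common_api < cost_conditionals
```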

[-]

Tech giants let the Web's metadata schemas and infrastructure languish

It's "languishing" and they should do it for us? It's flourishing, they're doing it for us, they have lots of open issues, and I want more for free without any work.

Wow! Nobody else does anything to collaboratively, inclusively develop schema and the problem is that search engines aren't just doing it for us?

1) Search engines do not owe us anything. They are not obligated to dominate us or the schema that we may voluntarily decide to include on our pages.

We've paid them nothing. They have no contract for service or agreement with us which compels them to please us or contribute greater resources to an open standard that hundreds of people are contributing to.

2) You people don't know anything about linked data and structured data.

Here's a list of schema: https://lov.linkeddata.es/dataset/lov/ .

Here's the Linked Open Data Cloud: https://lod-cloud.net/

Does your or this publisher's domain include any linked data?

Does this article include any linked data?

Do data quality issues pervade promising, comparatively-expensive, redundant approaches to natural-language comprehension, reasoning, and summarization?

Here, in contributing this example PR adding RDFa to the codeforantarctica web page, I probably made a mistake. https://github.com/CodeForAntarctica/codeforantarctica.githu... . Can you spot the mistake?

There should have been review.

https://schema.org/ClaimReview, W3C Verifiable Claims / Credentials, ld-signatures, and lds-merkleproof2017.

Which brings us to reification, truth values, property graphs, and the new RDF* and SPARQL* and JSON-LD* (which don't yet have repos with ongoing issues to tend to).

3) Get to work. This article does nothing to teach people how to contribute to slow, collaborative schema standards work.

Here's the link to the GitHub Issues so that you can contribute to schema.org: https://github.com/schemaorg/schemaorg

...

"Standards should be better and they should pay for it"

Who are the major contributors to the (W3C) open standard in question?

Is telling them to put up more money or step down going to result in getting what we want? Why or why not?

Who would merge PRs and close issues?

Have you misunderstood the scope of the project? What do the editors of the schema feel in regards to more specific domain vocabularies? Is it feasible or even advisable to attempt to out-schema domain experts who know how to develop and revise an ontology, or even just a vocabulary, with Protégé?

To give you a sense of how much work goes into creating a few classes and properties defined with RDFS in RDFa in HTML: here's the https://schema.org/Course , https://schema.org/CourseInstance , and https://schema.org/EducationEvent issue: https://github.com/schemaorg/schemaorg/issues/195

Can you find the link to the Use Cases wiki (which was the real work)? What strategy did you use to find it?

...

"Well, Google just does what's good for Google."

Are you arguing that Google.org should make charitable contributions to this project? Is that an advisable or effective way to influence a W3C open standard (where conflicts of interest by people just donating time are disclosed)?

Anyone can use something like extruct or OSDS to extract RDFa, Microdata, and/or JSON-LD from a page.

Everyone can include structured data and linked data in their pages.
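
For example, here's a minimal JSON-LD block as it might be embedded in a page (the values are illustrative, not from any real article), built with Python's json module:

```python
import json

# A minimal schema.org NewsArticle description, embedded the way publishers
# typically do: inside a <script type="application/ld+json"> element.
doc = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "datePublished": "2020-08-06",
    "author": {"@type": "Person", "name": "Example Author"},
}
markup = '<script type="application/ld+json">%s</script>' % json.dumps(doc)
```

An extractor such as extruct should recover the same structured data from this markup.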

There are surveys quantifying how many people have included which types in their pages. Some of that data is included on schema.org types pages.

...

Some written interview questions:

> Which issues have you contributed to? Which issues have you seen all the way to closed? Have you contributed a pull request to the project? Have you published linked data? What is the URL to the docs which explain how to contribute resources? How would you improve them?

https://twitter.com/westurner/status/1291903926007209984

...

After all that's happened here, I think Dan (who built FOAF, which all profitable companies could use instead of https://schema.org/Person ) deserves a week off to add more linked data to the internet now please.

[+]

schemaorg/schemaorg/CONTRIBUTING.md https://github.com/schemaorg/schemaorg/blob/main/CONTRIBUTIN... explains how you and your organization can contribute resources to the Schema.org W3C project.

If you or your organization can justify contributing one or more people at full or part time due to ROI or goodwill, by all means start sending Pull Requests and/or commenting on Issues.

"Give us more for free or step down". Wow. What PRs have you contributed to justify such demands?

https://schema.org/docs/documents.html links to the releases.

[-]

Time-reversal of an unknown quantum state

T-symmetry https://en.wikipedia.org/wiki/T-symmetry > See also links to "reversible computing" but not the "time reversal" disambiguation page?

[+]

Could there be multiple "collapsed" paths which consistently converge at the current or future measured state?

[-]

Electric cooker an easy, efficient way to sanitize N95 masks, study finds

[+]

Unfortunately the referenced NewsArticle does not link to the ScholarlyArticle https://schema.org/ScholarlyArticle :

"N95 Mask Decontamination using Standard Hospital Sterilization Technologies" (2020-04) https://www.medrxiv.org/content/10.1101/2020.04.05.20049346v... :

> We sought to test the ability of 4 different decontamination methods including autoclave treatment, ethylene oxide gassing, ionized hydrogen peroxide fogging and vaporized hydrogen peroxide exposure to decontaminate 4 different N95 masks of experimental contamination with SARS-CoV-2 or vesicular stomatitis virus as a surrogate. In addition, we sought to determine whether masks would tolerate repeated cycles of decontamination while maintaining structural and functional integrity. We found that one cycle of treatment with all modalities was effective in decontamination and was associated with no structural or functional deterioration. Vaporized hydrogen peroxide treatment was tolerated to at least 5 cycles by masks. Most notably, standard autoclave treatment was associated with no loss of structural or functional integrity to a minimum of 10 cycles for the 3 pleated mask models. The molded N95 mask however tolerated only 1 cycle. This last finding may be of particular use to institutions globally due to the virtually universal accessibility of autoclaves in health care settings.

The ScholarlyArticle referenced by and linked to by the OP NewsArticle is "Dry Heat as a Decontamination Method for N95 Respirator Reuse" (2020-07) https://pubs.acs.org/doi/full/10.1021/acs.estlett.0c00534 . Said article does not reference "N95 Mask Decontamination using Standard Hospital Sterilization Technologies" DOI: 10.1101/2020.04.05.20049346v2 . We would do well to record that (article A, seemsToConfirm, Article B) as third-party linked data (only if both articles do specifically test the efficacy of the given sterilization method with the COVID-19 coronavirus)

[+]

"Interim Recommendations for U.S. Households with Suspected or Confirmed Coronavirus Disease 2019 (COVID-19)" https://www.cdc.gov/coronavirus/2019-ncov/prevent-getting-si... :

> On the other hand, transmission of novel coronavirus to persons from surfaces contaminated with the virus has not been documented. Recent studies indicate that people who are infected but do not have symptoms likely also play a role in the spread of COVID-19. Transmission of coronavirus occurs much more commonly through respiratory droplets than through objects and surfaces, like doorknobs, countertops, keyboards, toys, etc. Current evidence suggests that SARS-CoV-2 may remain viable for hours to days on surfaces made from a variety of materials. Cleaning of visibly dirty surfaces followed by disinfection is a best practice measure for prevention of COVID-19 and other viral respiratory illnesses in households and community settings

[-]

Fed announces details of new interbank service to support instant payments

[+]

Interledger Protocol (ILP, ILPv4).

Interledger Architecture:

https://interledger.org/rfcs/0001-interledger-architecture/#... :

> For purposes of Interledger, we call all settlement systems ledgers. These can include banks, blockchains, peer-to-peer payment schemes, automated clearing house (ACH), mobile money institutions, central-bank operated real-time gross settlement (RTGS) systems, and even more.

[...]

> Interledger provides for secure payments across multiple assets on different ledgers. The architecture consists of a conceptual model for interledger payments, a mechanism for securing payments, and a suite of protocols that implement this design.

> The Interledger Protocol (ILP) is the core of the Interledger protocol suite. Colloquially, the whole Interledger stack is sometimes referred to as "ILP". Technically, however, the Interledger Protocol is only one layer in the stack.

> Interledger is not a blockchain, a token, nor a central service. Interledger is a standard way of bridging financial systems. The Interledger architecture is heavily inspired by the Internet architecture described in RFC 1122, RFC 1123 and RFC 1009.

[...]

> You can envision the Interledger as a graph where the points are individual nodes and the edges are accounts between two parties. Parties with only one account can send or receive through the party on the other side of that account. Parties with two or more accounts are connectors, who can facilitate payments to or from anyone they're connected to.

> Connectors [AKA routers] provide a service of forwarding packets and relaying money, and they take on some risk when they do so. In exchange, connectors can charge fees and derive a profit from these services. In the open network of the Interledger, connectors are expected to compete among one another to offer the best balance of speed, reliability, coverage, and cost.

ILP > Peering, Clearing and Settling: https://interledger.org/rfcs/0032-peering-clearing-settlemen...

ILP > Simple Payment Setup Protocol (SPSP): https://interledger.org/rfcs/0009-simple-payment-setup-proto...

> This document describes the Simple Payment Setup Protocol (SPSP), a basic protocol for exchanging payment information between payee and payer to facilitate payment over Interledger. SPSP uses the STREAM transport protocol for condition generation and data encoding.

> (Introduction > Motivation) STREAM does not specify how payment details, such as the ILP address or shared secret, should be exchanged between the counterparties. SPSP is a minimal protocol that uses HTTPS for communicating these details.

[...]

  GET /.well-known/pay HTTP/1.1
  Host: example.com
  Accept: application/spsp4+json, application/spsp+json
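
The same query, sketched with Python's urllib (the host is the spec's example domain, and no request is actually sent here):

```python
from urllib.request import Request

# Build (but don't send) an SPSP query: a plain HTTPS GET against the
# payment pointer's /.well-known/pay path with the SPSP Accept header.
req = Request(
    "https://example.com/.well-known/pay",
    headers={"Accept": "application/spsp4+json, application/spsp+json"},
)
# urlopen(req) would return a JSON body carrying the ILP destination_account
# and shared_secret, per the SPSP RFC.
```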

[-]

Shrinking deep learning’s carbon footprint

"Unlearning" is one algorithmic approach that may yield substantial energy consumption gains.

With many deep learning models, it's not possible to determine when or from what source something was learned: it's not possible to "back out" a change to the network, so the whole model has to be re-trained from scratch; which is O(n) in the training data instead of roughly O(1) for an incremental update.

The article covers software approaches (more energy-efficient algorithms) and mentions GPUs but not TPUs or ASICs.

Specialized chips (built with dynamic fabrication capacities) are far more energy efficient for specific types of workloads. We see this with mining ASICs, SSL accelerators, and also with Tensor Processing Units (for deep learning).

The externalities of energy production are the ultimate concern. If you're using cheap, clean energy with minimized external costs ("sustainable energy"), the energy-efficiency of the algorithm and the chips is of much less concern.

Could we recognize products, services, and data centers that were produced with and/or run on directly sourced clean energy as "200% Green"; with a logo on the box and/or the footer? 100% offset by PPAs is certainly progress.

[-]

Show HN: Starboard – Fully in-browser literate notebooks like Jupyter Notebook

[+]

Neat! There's a project called Jyve that compiles Jupyter Lab to WASM (using iodide). https://github.com/deathbeds/jyve There are kernels for JS, CoffeeScript, Brython, TypeScript, and P5. FWIU, the kernels are marked as unsafe because, unfortunately, there seems to be no good way to sandbox user-supplied notebook code from the application instance. The README describes some of the vulnerabilities that this entails.

The jyve project issues discuss various ideas for repacking Python packages beyond the set already included with Pyodide and supporting loading modules from remote sources.

https://developer.mozilla.org/en-US/docs/Web/Security/Subres... : "Subresource Integrity (SRI) is a security feature that enables browsers to verify that resources they fetch (for example, from a CDN) are delivered without unexpected manipulation. It works by allowing you to provide a cryptographic hash that a fetched resource must match."
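
An SRI value is just a hash-algorithm prefix plus the base64 digest of the resource bytes; computing one in Python (the script body and CDN URL are placeholders):

```python
import base64
import hashlib

def sri_hash(resource_bytes, alg="sha384"):
    """Compute a Subresource Integrity value like "sha384-<base64 digest>"."""
    digest = hashlib.new(alg, resource_bytes).digest()
    return "%s-%s" % (alg, base64.b64encode(digest).decode("ascii"))

# The result goes into the integrity attribute, e.g.:
#   <script src="https://cdn.example/app.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
integrity = sri_hash(b"console.log('hello');")
```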

There's a new Native File System API: "The new Native File System API allows web apps to read or save changes directly to files and folders on the user's device." https://web.dev/native-file-system/

We'll need a way to grant specific URLs specific, limited amounts of storage.

https://github.com/iodide-project/pyodide :

> The Python scientific stack, compiled to WebAssembly

> [...] Pyodide brings the Python 3.8 runtime to the browser via WebAssembly, along with the Python scientific stack including NumPy, Pandas, Matplotlib, parts of SciPy, and NetworkX. The packages directory lists over 35 packages which are currently available.

> Pyodide provides transparent conversion of objects between Javascript and Python. When used inside a browser, Python has full access to the Web APIs.

https://github.com/deathbeds/jyve/issues/46 :

> Would miniforge and conda-forge build a WASM architecture target?

> Emscripten or WASI?

[-]

Ask HN: Learning about distributed systems?

I used to love Operating Systems during my undergrad; Modern Operating Systems by Tanenbaum is to date the only academic book I've read entirely. I recently read an article by Werner Vogels about how Amazon built Aurora, and I was captivated by it. I want to start reading about distributed systems. What would be a good start / road map?

[+]

> "Designing Data-Intensive Applications" by Martin Kleppmann: https://dataintensive.net/ https://g.co/kgs/xJ73FS

From a previous question, "Ask HN: CS papers for software architecture and design?" (https://news.ycombinator.com/item?id=15778396), and the distributed systems we eventually realize were needed in the first place:

> Bulk Synchronous Parallel: https://en.wikipedia.org/wiki/Bulk_synchronous_parallel .

Many/most (?) distributed systems can be described in terms of BSP primitives.
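
A toy illustration of those primitives (a sequential simulation, not a real distributed runtime; the aggregation example is mine):

```python
# One BSP superstep: (1) each node computes locally on its state and inbox,
# (2) messages are exchanged, (3) a barrier ends the superstep so every node
# sees a consistent view before the next one begins.
def superstep(states, inboxes, compute):
    outboxes = {node: [] for node in states}
    for node in list(states):
        states[node], messages = compute(states[node], inboxes[node])
        for dest, msg in messages:
            outboxes[dest].append(msg)
    return states, outboxes  # returning == everyone reached the barrier

# Example compute step: add up received values, forward own value to node 0.
def compute(state, inbox):
    return state + sum(inbox), [(0, state)]

states, inboxes = {0: 1, 1: 2, 2: 3}, {0: [], 1: [], 2: []}
states, inboxes = superstep(states, inboxes, compute)  # node 0 still holds 1
states, inboxes = superstep(states, inboxes, compute)  # node 0 holds 1+1+2+3
```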

> Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science) .

> Raft: https://en.wikipedia.org/wiki/Raft_(computer_science)#Safety

> CAP theorem: https://en.wikipedia.org/wiki/CAP_theorem .

Papers-we-love > Distributed Systems: https://github.com/papers-we-love/papers-we-love/tree/master...

awesome-distributed-systems also has many links to theory: https://github.com/theanalyst/awesome-distributed-systems

- Byzantine fault: https://en.wikipedia.org/wiki/Byzantine_fault :

> A [Byzantine fault] is a condition of a computer system, particularly distributed computing systems, where components may fail and there is imperfect information on whether a component has failed. The term takes its name from an allegory, the "Byzantine Generals Problem",[2] developed to describe a situation in which, in order to avoid catastrophic failure of the system, the system's actors must agree on a concerted strategy, but some of these actors are unreliable.

awesome-bigdata lists a number of tools: https://github.com/onurakpolat/awesome-bigdata

Practically, dask.distributed (joblib -> SLURM), dask-ml, dask-labextension (a JupyterLab extension for dask), and the Rapids.ai tools (e.g. cuDF) scale from one to many nodes.

Not without a sense of irony, as the lists above link to many papers that could serve as readings with quizzes:

Distributed systems -> Distributed computing: https://en.wikipedia.org/wiki/Distributed_computing

Category: Distributed computing: https://en.wikipedia.org/wiki/Category:Distributed_computing

Category:Distributed_computing_architecture : https://en.wikipedia.org/wiki/Category:Distributed_computing...

DLT: Distributed Ledger Technology: https://en.wikipedia.org/wiki/Distributed_ledger

Consensus (computer science) https://en.wikipedia.org/wiki/Consensus_(computer_science)

[-]

Ask HN: How can I “work-out” critical thinking skills as I age?

As I get older, I've realized I'm not as sharp as I used to be. Maybe it's from the fatigue of juggling 2 kids, but I'm very ill-prepared for interviews because I simply can't answer "product questions" and brain teasers. It's a skill I need, and truthfully I was never good at consultant-type questions to begin with, but I'm seeing a lot of these questions in Data Science interviews.

Any help or resources will be tremendously appreciated.

Problem solving: https://en.wikipedia.org/wiki/Problem_solving

Critical thinking: https://en.wikipedia.org/wiki/Critical_thinking

Computational Thinking: https://en.wikipedia.org/wiki/Computational_thinking

> 1. Problem formulation (abstraction);

> 2. Solution expression (automation);

> 3. Solution execution and evaluation (analyses).

Interviewers may be more interested in seeing problem-solving methods demonstrated and hearing you think aloud than in an actual solution produced in an anxiety-inducing scenario.

https://en.wikipedia.org/wiki/Brilliant_(website) :

> Brilliant offers guided problem-solving based courses in math, science, and engineering, based on National Science Foundation research supporting active learning.[14]

Coding Interview University: https://github.com/jwasham/coding-interview-university

Programmer Competency Matrix: https://github.com/hltbra/programmer-competency-checklist

Inference > See also: https://en.wikipedia.org/wiki/Inference

- Deductive reasoning: https://en.wikipedia.org/wiki/Deductive_reasoning

- Inductive reasoning: https://en.wikipedia.org/wiki/Inductive_reasoning

> This is the [open] textbook for the Foundations of Data Science class at UC Berkeley: "Computational and Inferential Thinking: The Foundations of Data Science" http://inferentialthinking.com/

[-]

The tragedy of FireWire: Collaborative tech torpedoed by corporations

Due to DMA (Direct Memory Access) in most implementations, IEEE 1394 ("FireWire") can be used to directly read from and write to RAM.

See: IEEE 1394 > Security issues https://en.wikipedia.org/wiki/IEEE_1394#Security_issues

FWIU, USB 3 is faster than FireWire; there are standard, interchangeable USB connectors and adapters; and USB implementations do not use DMA. https://en.wikipedia.org/wiki/USB_3.0

[+]

So your argument is that cost, not security, is the reason that USB "won" the external device interface competition with FireWire?

Good to know that USB4 implementations are making the same mistake as FireWire implementors did in choosing performance over security. Unfortunately it looks like there will be no alternative, except maybe to use a USB3 hub (or an OS with fuzzed IOMMU and controller firmwares)?

Could an NX bit for data coming from buses with and without DMA help at all?

Hot gluing external ports now seems a bit more rational and justified for systems where physical access is less controlled.

[+]

I read much of the article (which assumed that "FireWire" failed because of suppliers failing to work together, rather than because of waning demand due in part to corporate customers' knowledge of the security risks of most implementations).

Thanks for the info on USB-4, DMA, IOMMU.

IOMMU: https://en.wikipedia.org/wiki/Input%E2%80%93output_memory_ma...

Looks like there are a number of iommu Linux kernel parameters: https://www.kernel.org/doc/html/latest/admin-guide/kernel-pa...

Wonder what the defaults are and what the comparable parameters are for common consumer OSes.

Looks like NX bit support is optional in IOMMUs.

Can I configure the amount of RAM allocated to this?

[+]

Thanks again.

[-]

The Developer’s Guide to Audit Logs / SIEM

This article suggests that there should be separate data collection systems for: analytics, SIEM logs, and performance metrics.

The article mentions the CEF (Common Event Format) standard but not syslog or GELF or other JSON formats.

[ArcSight] Common Event Format [PDF]: https://kc.mcafee.com/resources/sites/MCAFEE/content/live/CO...

GELF: Graylog Extended Log Format: https://docs.graylog.org/en/latest/pages/gelf.html

Wikipedia > Syslog lists a few limitations of Syslog (no message delivery confirmation, though there is a reliable delivery RFC; and insufficient payload standardization) and also links to the existing Syslog RFCs. https://en.wikipedia.org/wiki/Syslog

Are push-style systems ideal for security logshipping systems? What sort of a message broker is ideal? AMQP has reliable delivery; while, for example, ZeroMQ does not and will drop messages due to resource exhaustion.

Developers simply need an API for their particular framework that queues without blocking and then ships log structs to a remote server. This typically means moving beyond a single-threaded application architecture so that the singular main [green] thread is not blocked when the remote log server is not responding.
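
In Python this is what the stdlib's QueueHandler/QueueListener pair provides; here the "remote server" is just a StreamHandler stand-in:

```python
import logging
import logging.handlers
import queue

# The application thread only enqueues records; a background QueueListener
# thread performs the actual, possibly slow, emission to the real handler.
log_queue = queue.Queue(-1)                   # unbounded; bound it to shed load
remote_handler = logging.StreamHandler()      # stand-in for a syslog/GELF handler
listener = logging.handlers.QueueListener(log_queue, remote_handler)
listener.start()

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)
logger.addHandler(logging.handlers.QueueHandler(log_queue))
logger.info("structured event", extra={"user_id": 42})

listener.stop()                               # drains the queue, joins the thread
```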

SIEM: Security information and event management: https://en.wikipedia.org/wiki/Security_information_and_event...

[-]

Del.icio.us

kome | 2020-07-29 04:26:06 | 1649

The Firefox (and Chromium) bookmarks storage and sync systems still don't persist tags!

"Allow reading and writing bookmark tags" https://bugzilla.mozilla.org/show_bug.cgi?id=1225916

Notes re: how this could be standardized with JSON-LD: https://bugzilla.mozilla.org/show_bug.cgi?id=1225916#c116

The existing Web Experiment for persisting bookmark tags: https://github.com/azappella/webextension-experiment-tags/bl...

[-]

Ask HN: Recommendations for Books on Writing?

I want to propose a book club for writing as an engineer. Writing is fundamentally and critically important, but it seems that we don't emphasize it as much as we should for engineers (outside Amazon, where apparently it is a prominent member of the leadership pantheon).

I'm interested in any suggestions that HN has for great books on writing as an engineer! Accessibility and ease are important factors for a book club as well.

Technical Writing: https://en.wikipedia.org/wiki/Technical_writing

Google Technical Writing courses (1 & 2) and resources: https://developers.google.com/tech-writing :

- Google developer documentation style guide: https://developers.google.com/style

- Microsoft Writing Style Guide: https://docs.microsoft.com/en-us/style-guide/welcome/

Season of Docs is a program where applicants write documentation for open source projects: https://developers.google.com/season-of-docs/

Many open source projects are happy to accept necessary contributions of docs and editing; but do keep in mind that maintaining narrative documentation can be far more burdensome than maintaining API documentation that's kept next to the actual code. Systems like Doxygen, epydoc, Javadoc, and sphinx-apidoc enable developers to generate API documentation for a particular version of the software project as one or more HTML pages.

ReadTheDocs builds documentation from reStructuredText and now also Markdown sources using Sphinx and the ReadTheDocs Docker image. ReadTheDocs organizes docs with URLs of the form <projectname>.rtfd.io/<language>/<version|latest>: https://docs.readthedocs.io/en/latest/ . The ReadTheDocs URL scheme reduces the prevalence of broken external links to documentation; though authors are indeed free to delete and rename docs pages and change which VCS tags are archived with RTD.

Write the Docs is a conference for technical documentation authors which is supported in part by ReadTheDocs: https://www.writethedocs.org/

Write the Docs > Learning Resources > All our videos and articles: https://www.writethedocs.org/topics/ :

> This page links to the topics that have been covered by conference talks or in the newsletter.

You might say that UX (User Experience) includes UI design and marketing: the objective is to imagine yourself as a customer experiencing the product or service afresh.

Writing dialogue is an activity we often associate more with creative writing exercises; where the objective is to meditate upon compassion for others.

One must imagine oneself as the people who interact with the team.

Cognitive walkthrough: https://en.wikipedia.org/wiki/Cognitive_walkthrough

The William Golding, Jung, and Joseph Campbell books on screenwriting, archetypes, and the hero's journey monomyth are excellent; if you're looking for creative writing resources.

[-]

Ask HN: How did you learn x86-64 assembly?

I'm an experienced C/C++ programmer and I occasionally look at the generated assembly to check for optimizations, loop unrolling, vectorization, etc. I understand what's going on at the surface level, but I have a hard time understanding what's going on in detail, especially at high optimization levels, where the compiler does all kinds of clever tricks. I experiment with code on godbolt.org and look up the various opcodes, but I would like to take a more structured approach to learning x86-64 assembly, especially when it comes to common patterns, tips and tricks, etc.

Are there any good books or tutorials you can recommend which go beyond the very beginner level?

High Level Assembly (HLA) https://en.wikipedia.org/wiki/High_Level_Assembly

> HLA was originally conceived as a tool to teach assembly language programming at the college-university level. The goal is to leverage students' existing programming knowledge when learning assembly language to get them up to speed as fast as possible. Most students taking an assembly language programming course have already been introduced to high-level control flow structures, such as IF, WHILE, FOR, etc. HLA allows students to immediately apply that programming knowledge to assembly language coding early in their course, allowing them to master other prerequisite subjects in assembly before learning how to code low-level forms of these control structures. The book The Art of Assembly Language Programming by Randall Hyde uses HLA for this purpose

Web: https://plantation-productions.com/Webster/

Book: "The Art of Assembly Language Programming" https://plantation-productions.com/Webster/www.artofasm.com/

Portable, Opensource, IA-32, Standard Library: https://sourceforge.net/projects/hla-stdlib/

"12.4 Programming in C/C++ and HLA" in the Linux 32 bit edition: https://plantation-productions.com/Webster/www.artofasm.com/...

... A chapter about wider registers, WASM, LLVM bitcode, etc. might be useful?

... Many awesome lists link to OllyDbg and other great ASM resources, such as Ghidra: https://www.google.com/search?q=ollydbg+site%3Agithub.com+in...

[-]

Brain connectivity levels are equal in all mammals, including humans: study

hhs | 2020-07-22 09:39:11 | 197 | # | ^
[+]
[+]

"fNIRS Compared with other neuroimaging techniques" https://en.wikipedia.org/wiki/Functional_near-infrared_spect...

> When comparing and contrasting these devices it is important to look at the temporal resolution, spatial resolution, and the degree of immobility.

[+]

OP suggests that the spatial resolution of existing MRI neuroimaging capabilities is insufficient to observe, characterize, or generalize about neuronal activity in mammalian species. fNIRS (functional near-infrared spectroscopy) is one alternative neuroimaging capability that we could compare with fMRI according to the criteria suggested in the cited Wikipedia article: "temporal resolution, spatial resolution, and the degree of immobility".

[-]

Ask HN: Resources to start learning about quantum computing?

edu | 2020-07-22 04:21:32 | 185 | # | ^

Hi there,

I'm an experienced software engineer (15+ years dev experience, MSc in Computer Science), and quantum computing is the first thing in my experience that has been hard to grasp/understand. I'd love to fix that ;)

What resources would you recommend to start learning about quantum computing?

Ideally resources that touch both the theoretical base and evolve to more practical usages.

[-]

Launch HN: Charityvest (YC S20) – Employee charitable funds and gift matching

Stephen, Jon, and Ashby here, the co-founders of Charityvest (https://charityvest.org). We created a modern, simple, and affordable way for companies to include charitable giving in their suite of employee benefits.

We give employees their own tax-deductible charitable giving fund, like an “HSA for Charity.” They can make contributions into their fund and, from their fund, support any of the 1.4M charities in the US, all on one tax receipt.

Using the funds, we enable companies to operate gift matching programs that run on autopilot. Each donation to a charity from an employee is matched automatically by the company in our system.

A company can set up a matching gift program and launch giving funds to employees in about 10 minutes of work.

Historically, corporate charitable giving matching programs have been administratively painful to operate. Making payments to charities, maintaining tax records, and doing due diligence on charitable compliance is taxing on HR / finance teams. The necessary software to help has historically been quite expensive and not very useful for employees beyond the matching features.

This is one example of an observation Stephen made after working for years as a philanthropic consultant. Consumer fintech products aren’t built to make great giving experiences for donors. Instead, they are built for buyers — e.g., nonprofits (fundraising) or corporations (gift matching) — without a ton of consideration for the everyday user experience.

A few years back, my wife and I made a commitment to give a portion of our income away every year, and we found it administratively painful to give regularly. The tech that nonprofits typically use hardly inspires generosity — e.g., high fees, poor user flows, and questionable information flow (like tax receipts). Giving platforms try to compensate for poor functionality with bright pictures of happy kids in developing countries, but when the technology is not a good financial experience it puts a damper on things.

Charityvest started when I noticed a particular opportunity with donor-advised funds, which are tax-deductible giving funds recognized by the IRS. They are growing quickly (20% CAGR), but mainly among the high-net worth demographic. We believe they are powerful tools. They enable donors to have a giving portfolio all from one place (on one tax receipt) and have full control over their payment information/frequency, etc. Most of all, they enable a donor to split the decisions of committing to give and supporting a specific organization. Excitement about each of these decisions often strikes at different times for donors—particularly those who desire to give on a budget.

We believe everyone should have their own charitable giving fund no matter their net worth. We’ve created technology that has democratized donor-advised funds.

We also believe good technology should be available for every company, big and small. Employers can offer Charityvest for $2.49 / employee / month subscription, and we charge no fees on any of the giving — charities receive 100% of the money given.

Lastly, we send the program administrator a fun report every month to let them know all the awesome giving their company and its employees did in one dashboard. This info can be leveraged for internal culture or external brand building.

We’re just launching our workplace giving product, but we’ve already built a good portfolio of trusted customers, including Eric Ries’ (author of The Lean Startup) company, LTSE. We’ve particularly seen a number of companies use us as a meaningful part of their corporate decision to join the fight for racial justice in substantive ways.

Our endgame is that the world becomes more generous, starting with the culture of every company. We believe giving is fundamentally good and we want to build technology that encourages more of it by making it more simple and accessible.

You can check out our workplace giving product at (https://charityvest.org/workplace-giving). If you’re interested, we can get your company up and running in 10 minutes. Or, please feel free to forward us on to your HR leadership at your company.

Our giving funds are also available for free for any individual on https://charityvest.org — without gift matching and reporting. We’d invite you to check out the experience. For individuals, we make gifts of cash and stock to any charity fee-free.

Happy to share this with you all, and we’d love to know what you think.

What a great idea!

Are there two separate donations or does it add the company's name after the donor's name? Some way to notify recipients about the low cost of managing a charitable donation match program with your service would be great.

Have you encountered any charitable foundations which prefer to receive cryptoassets? Red Cross and UNICEF accept cryptocurrency donations for the children, for example.

Do you have integration with other onboarding and HR/benefits tools on your roadmap? As a potential employee, I would like to work for a place that matches charitable donations, so mentioning as much in job descriptions would be helpful.

[+]

> Our matching system issues an identical grant from the fund of the matching company. It goes out in the same grant cycle as the employee grant so they go together.

So the system creates a separate transaction for the original and the matched donation with each donor's name on the respective gift?

How do users choose which elements of their HR information to sync with your service? IDK what the monthly admin cost there is.

There are a few HR, benefits, contracts, and payroll YC companies with privacy regulation compliance and APIs https://www.ycombinator.com/companies/?query=Payroll

https://founderkit.com/people-and-recruiting/health-insuranc...

[+]

Thanks for clarifying.

Do you offer a CSV containing donor information to the charity?

Do you support anonymous matched donations?

Can donors specify that a donation is strongly recommended for a specific effort?

...

3% * $1000/yr == $2.50/mo * 12mo

[+]

Outstanding. CSV would be helpful for recognizing donors in e.g. annual and ESG/CSR reports.

It may be helpful to integrate with charity evaluation services to help donors assess various opportunities to give.

Charity Navigator > Evaluation method https://en.wikipedia.org/wiki/Charity_Navigator#Evaluation_m...

[+]
[-]

We Need a Yelp for Doctoral Programs

How are the data needs for such a doctoral and post-doctoral evaluation program different from the data needs for https://collegescorecard.ed.gov ?

Data: https://collegescorecard.ed.gov/data/

Data documentation: https://collegescorecard.ed.gov/data/documentation/

[+]
[+]
[+]
[+]
[+]
[-]

All of the World’s Money and Markets in One Visualization

> Derivatives top the list, estimated at $1 quadrillion or more in notional value according to a variety of unofficial sources.

1 Quadrillion: 1,000,000,000,000,000 (10^15)

Derivative (finance) https://en.wikipedia.org/wiki/Derivative_(finance)

Derivatives market https://en.wikipedia.org/wiki/Derivatives_market :

> The market can be divided into two, that for exchange-traded derivatives and that for over-the-counter derivatives.

[-]

Why companies lose their best innovators (2019)

hhs | 2020-07-18 21:06:28 | 190 | # | ^

https://news.ycombinator.com/item?id=23886158

> Three reasons companies lose their best innovators.

> 1. They fail to recognize and support the innovators

> 2. Innovation becomes a herculean task

> 3. Corporations don’t match rewards with outcomes

While the paragraphs under point 2 do discuss risk and the paragraphs under point 3 do discuss rewards, I'm not sure this article belongs here.

Risk and Reward.

Large corporations are able to pay people by doing things at scale, with sufficient margin at sufficient volume to justify continued investment. Risk is minimized by focusing on ROI.

Startups assume lots of risk and lots of debt and most don't make it. Liquidation preference applies as the startup team adjourns (and maybe open-sources what remains). In a large corporation, that burnt capital is reported to the board (which represents the shareholders) who aren't "gambling" per se. "You win some and you lose some" is across the street; and they don't have foosball and snacks.

How can large organizations (nonprofit, for-profit, governmental) foster intrapreneurial mindsets without just continuing to say "innovation" more times and expecting things to happen? Drink. "Innovators welcome!". Drink water.

"Intrapreneurial." What does that even mean? The employee, within their specialized department, spends resources (time, money, equipment) on something that their superior managers have not allocated funding for because they want: (a) recognition; (b) job security; (c) to save resources such as time and money; (d) to work on something else instead of this wasteful process; (e) more money.

Very few organizations have anything like "20% time". Why was 20% time thrown off the island to a new island where they had room to run? Do they have foosball? Or is the work so fun that they don't even need foosball? Or is it worth long days and nights because the potential return is enough money to retire tomorrow and then work on what?

Step 1. Steal innovators to work on our one thing

Step 2.

Step 3. Profit.

20% Project: https://en.wikipedia.org/wiki/20%25_Project

Intrapreneurship: https://en.wikipedia.org/wiki/Intrapreneurship

Internal entrepreneur: https://en.wikipedia.org/wiki/Internal_entrepreneur

CINO: Chief Innovation Officer / CTIO: Chief Technology Innovation Officer https://en.wikipedia.org/wiki/Chief_innovation_officer

... Is acquiring innovation and bringing it to scale a top-down process? How do we capture creative solutions and then allocate willing and available resources to making that happen?

awesome-ideation-tools: https://github.com/zazaalaza/awesome-ideation-tools

[-]

Powerful AI Can Now Be Trained on a Single Computer

[+]
[+]

Isn't this what Proof of Work incentivizes? Energy efficiency over transistor count.

[-]

Ask HN: Something like Khan Academy but full curriculum for grade schoolers?

Khan Academy continually gets held up as a great resource for online courses across the age spectrum for math-related subjects. With the pandemic continuing to grow in the US and schools not really sure how to handle things, the GF and I are looking into other options.

Is there a recommended resource that gives unbiased (as possible) reviews for middle school (7-8th grade) curriculum? Searching these days really doesn't bring up quality, just options one has to comb through.

[-]

AutoML-Zero: Evolving Code That Learns

[+]

"AutoML-Zero: Evolving Machine Learning Algorithms From Scratch" (2020) https://arxiv.org/abs/2003.03384 https://scholar.google.com/scholar?cluster=11748751662887361...

How does this compare to MOSES (OpenCog/asmoses) or PLN? https://github.com/opencog/asmoses https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22... (2007)

Is this symbolic AI and/or a (deep learning) neural network?

[-]

SymPy - a Python library for symbolic mathematics

[+]
[+]

NumPy for Matlab users: https://numpy.org/doc/stable/user/numpy-for-matlab-users.htm...

SymPy vs Matlab: https://github.com/sympy/sympy/wiki/SymPy-vs.-Matlab

If you later need to do distributed ML, it is advantageous to already be working in Python: Dask Distributed, Dask-ML, RAPIDS.ai (cuDF), PyArrow, xeus-cling

[+]
[+]

SymEngine https://github.com/symengine/symengine

> SymEngine is a standalone fast C++ symbolic manipulation library. Optional thin wrappers allow usage of the library from other languages, e.g.:

> [...] Python wrappers allow easy usage from Python and integration with SymPy and Sage (the symengine.py repository)

https://en.wikipedia.org/wiki/SymPy > Related Projects:

> SymEngine: a rewriting of SymPy's core in C++, in order to increase its performance. Work is currently in progress to make SymEngine the underlying engine of Sage too

[+]
[+]

There's a lot of overlap and there are syntactical differences. SymPy is included in the CoCalc image. SageMath is now conda-installable.

Things that the SageMath CAS can do that SymPy cannot yet:

- solve multivariate systems of inequalities
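To make the SymPy/Sage comparison concrete, here is a minimal sketch of the symbolic manipulation SymPy handles well (assumes sympy is installed; the expressions are illustrative):

```python
# Minimal SymPy sketch: exact symbolic results, not floating-point approximations.
import sympy as sp

x = sp.symbols("x")

identity = sp.simplify(sp.sin(x) ** 2 + sp.cos(x) ** 2)       # reduces to 1
gaussian = sp.integrate(sp.exp(-x ** 2), (x, -sp.oo, sp.oo))  # evaluates to sqrt(pi)
roots = sp.solve(x ** 2 - 2, x)                               # exact roots +/- sqrt(2)

print(identity, gaussian, roots)
```

Univariate inequalities like `sp.solve(x**2 - 4 < 0, x)` also work; multivariate systems of inequalities are where the SageMath CAS currently goes further.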

[-]

Ask HN: Are there any messaging apps supporting Markdown?

I'd like to easily send formatted code, and bullet points, etc. through a messaging app without having to resort to a heavy app like Slack.

Mattermost supports CommonMark Markdown: https://docs.mattermost.com/help/messaging/formatting-text.h...

Zulip supports ~CommonMark Markdown: https://zulip.readthedocs.io/en/latest/subsystems/markdown.h...

Reddit supports Markdown. https://www.reddit.com/wiki/markdown

Discourse now supports CommonMark Markdown.

GitHub, BitBucket, GitLab and Gogs/Gitea support Markdown.

[+]

I'd differ on the category definition. Public messaging (without PM or DM features) is still messaging; and often far more useful than trying to forward 1:1 messages in order to bring additional participants onboard.

It's worth noting that GH/BB/GL have all foregone PM features; probably for the better in terms of productivity: messaging @all is likely more productive.

[-]

What vertical farming and ag startups don't understand about agriculture

[+]
[+]
[+]
[+]
[+]
[+]
[+]

>> "Actually no not really. Plants only absorb two wavelengths of light. It's currently more efficient to convert sun into solar power via panels and then to light LEDs supplying only the wavelengths that plants use. Despite the seeming inefficiency here, the fact is that plants are even more inefficient at absorbing light not at the right wavelengths than solar panels."

> Could one imagine a material that would absorb solar spectrum and emit the preferred frequencies? Something like a polymer one could stretch over fields to get more from the suns rays.

Would you call that a "solar transmitter"?

https://en.wikipedia.org/wiki/Transmitter :

> Generators of radio waves for heating or industrial purposes, such as microwave ovens or diathermy equipment, are not usually called transmitters, even though they often have similar circuits.

Would "absorption spectroscopy" specialists have insight into whether this is possible without solar cells, energy storage, and UV LEDs? https://en.wikipedia.org/wiki/Absorption_spectroscopy

(edit) The thermal energy from sunlight (from the FREE radiation from the nuclear reaction at the center of our solar system) is also useful to and necessary for plants. There's probably a passive heat pipe / solar panel cooling solution that could harvest such heat for colder seasons and climates.

Also, UV-C is useful for sanitizing (UVGI) but not really for plant growth. https://en.wikipedia.org/wiki/Ultraviolet_germicidal_irradia... :

> UVGI can be coupled with a filtration system to sanitize air and water.

Is that necessary or desirable for plants?

https://www.lumigrow.com/learning-center/blogs/the-definitiv... :

> The light that plants predominately use for photosynthesis ranges from 400–700 nm. This range is referred to as Photosynthetically Active Radiation (PAR) and includes red, blue and green wavebands. Photomorphogenesis occurs in a wider range from approximately 260–780 nm and includes UV and far-red radiation.

Photomorphogenesis: https://en.wikipedia.org/wiki/Photomorphogenesis

PAR: Photosynthetically active radiation: https://en.wikipedia.org/wiki/Photosynthetically_active_radi...

Grow light: https://en.wikipedia.org/wiki/Grow_light

Are there bioluminescent e.g. algae which emit PAR and/or UV? Algae can feed off of waste industrial gases.

Bioluminescence > Light production: https://en.wikipedia.org/wiki/Bioluminescence#Light_producti...

Biophoton: https://en.wikipedia.org/wiki/Biophoton

Chemiluminescence: https://en.wikipedia.org/wiki/Chemiluminescence

Electrochemiluminescence: https://en.wikipedia.org/wiki/Electrochemiluminescence

Quantum dot display / "QLED": https://en.wikipedia.org/wiki/Quantum_dot_display

Could it be possible? Analyzing the inputs and outputs is useful in natural systems, as well.

[-]

Ask HN: What are your go to SaaS products for startups/MVPs?

lbj | 2020-06-15 05:26:29 | 169 | # | ^

Looking for some inspiration. I've done a lot of MVPs/early-stage apps over the years and I tend to lean on the same SaaS portfolio for mail, text gateways, payment, etc., but I'm sure I've missed a few valuable additions.

Here are a few I use. Mail: Mailchimp / Mandrill. Payment: Paylike. Search: Algolia.

https://StackShare.io and https://FounderKit.com are great places to find reviews of SaaS services:

> mails,

https://founderkit.com/growth-marketing/email-marketing/revi...

https://stackshare.io/email-marketing

https://zapier.com/learn/email-marketing/

> text gateways,

https://founderkit.com/apis/sms/reviews

https://stackshare.io/voice-and-sms

> payments

https://founderkit.com/apis/credit-card-processing/reviews

https://stackshare.io/payment-services

Both have categories:

https://stackshare.io/categories

https://founderkit.com/reviews

[+]

Long term viability of SaaS solutions is definitely worth researching.

Is this something that's going to get acquired and be extinguished?

What are our switching costs?

How do we get our data in a format that can be: read into our data warehouse/lake and imported into an alternate service if necessary in the future?

How does Coscout compare to e.g. Crunchbase, PitchBook (Morningstar), YCharts, AngelList?

[+]
[-]

Ask HN: Do you read aloud or silently in your minds?

Most times while reading a new topic that I am not familiar with, I tend to read aloud in my mind. Yet that changes based on the content and the way it is written.

When I'm focused, I notice that reading silently helps increase my reading speed and cognition, like everything is flowing in.

Other times I don't seem to understand anything if I'm not reading it aloud in my mind.

Has anyone noticed such a thing, and if so, can you share any tips or information you've learned about this behavior?

[-]

Ask HN: How do you deploy a Django app in 2020?

Hi. I'm a mid-level software engineer trying to deploy a small (2000 max users) Django app to production.

If I Google "how to deploy a Django app", I get 10+ different answers.

Can anyone on HN help me, please?

[+]

If you have only one production server, dokku is "A docker-powered PaaS that helps you build and manage the lifecycle of applications." Dokku supports Heroku buildpack deployment (buildstep), Procfiles, Dockerfile deployment, Docker image deployment, git deployment (gitreceive), or tarfile deployments. https://github.com/dokku/dokku

There are a number of plugins for Dokku. Dokku ships with the nginx plugin as the HTTP frontend proxy. Dokku supports SSL certs with the certs plugin.

When you need to move to more than one server, what do you do? There's now a dokku-scheduler-kubernetes plugin which can do HA (high availability) which is worth reading about before you develop and document your own deployment workflow. https://github.com/dokku/dokku-scheduler-kubernetes

I also always put build, test, and deployment commands in a Makefile.

Package it, as a container or as containers that install an RPM/DEB/APK/conda/Python package (possibly containing a zipapp). Zipapps are fast.

If you have any non-python dependencies, a Pythonpkg only solves for part of the packaging needs.

Producing a packaged artifact should be easy and part of your CI build script.
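As a sketch of the zipapp option, the stdlib `zipapp` module can produce a single-file artifact from a package directory (the package name `myapp` and its hello-world payload are invented for illustration):

```python
# Sketch: build and run a single-file zipapp with the stdlib `zipapp` module.
import os
import subprocess
import sys
import tempfile
import zipapp

# Create an illustrative package directory with a __main__.py entry point.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "myapp")
os.mkdir(src)
with open(os.path.join(src, "__main__.py"), "w") as f:
    f.write("print('hello from a zipapp')\n")

# Bundle it into one deployable .pyz file.
pyz = os.path.join(workdir, "myapp.pyz")
zipapp.create_archive(src, target=pyz, interpreter="/usr/bin/env python3")

# The interpreter runs the .pyz directly.
result = subprocess.run([sys.executable, pyz], capture_output=True, text=True)
print(result.stdout)
```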

Here's the cookiecutter-django production docker-compose.yml with containers for django, celery, postgres, redis, and traefik as a load balancer: https://github.com/pydanny/cookiecutter-django/blob/master/%...

Cookiecutter-django also includes a Procfile.

With k8s, you have an ingress (~load balancer + SSL termination proxy), which need not be traefik.

You can generate k8s YML from docker-compose.yml with Kompose.

I just found this which describes using GitLab CI with Helm: https://davidmburke.com/2020/01/24/deploy-django-with-helm-t...

What is the command to scale up or down? Do you need a geodistributed setup (on multiple providers' clouds)? Who has those credentials and experience?

How do you do red/green or rolling deployments?

Can you run tests in a copy of production?

Can you deploy when the tests that run on git commit pass?

What runs the database migrations in production; while users are using the site?

If something deletes the whole production setup or the bus factor is 1, how long does it take to redeploy from zero; and how much manual work does it take?

CI + Ansible + Terraform + Kubernetes.

Whatever tools you settle on, django-environ for a 12 Factor App may be advisable. https://github.com/joke2k/django-environ

The Twelve-Factor App: https://12factor.net/
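A 12-factor app reads configuration from environment variables rather than from files baked into the deploy. A rough stdlib sketch of the pattern that django-environ packages up (the setting names are illustrative, and this is not django-environ's actual API):

```python
# 12-factor style config: read typed settings from environment variables.
import os

def env(name, default=None, cast=str):
    """Read one config value from the environment, with a default and a cast."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    if cast is bool:
        return raw.strip().lower() in ("1", "true", "yes", "on")
    return cast(raw)

# Hypothetical Django-style settings drawn from the environment:
os.environ["DEBUG"] = "false"
os.environ["ALLOWED_HOSTS"] = "example.com,www.example.com"

DEBUG = env("DEBUG", default=False, cast=bool)
ALLOWED_HOSTS = env("ALLOWED_HOSTS", default="", cast=lambda s: s.split(","))
print(DEBUG, ALLOWED_HOSTS)
```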

[-]

Containers from first principles

[+]

"Docker Without Docker" (2015) explains /sbin/init and systemd-nspawn. Systemd was still relatively new when Docker was first created. https://chimeracoder.github.io/docker-without-docker/

[+]

Are there other systemd + containers solutions?

"Chapter 4. Running containers as Systemd services with Podman" https://access.redhat.com/documentation/en-us/red_hat_enterp...

AFAIU, when running containers with systemd:

- logs go to journald by default

- there's no docker-compose for just the [name-prefixed] containers in the docker-compose.yml

- you can use systemd unit template parametrization

- it's not as easy to collect metrics on every container on the system without a read-only docker socket: how many containers are running, how much RAM quota are they assigned and utilizing? What are the filesystem and port mappings?

- you can run containers as non-root

- you can run containers in systemd timer units

- you use runC to handle seccomp

... You can do cgroups and namespaces with just systemd; but keeping chroots/images upgraded is outside the scope of systemd: where is the ideal boundary between systemd and containers?

See this comment regarding per-container MAC MCS labels: https://news.ycombinator.com/item?id=23430959

There's much additional complexity that justifies k8s / OpenShift: when would I want to manage containers with just systemd units?

> Many people might think the word “container” has a specific meaning within the Linux kernel; however the kernel has no notion of a “container”. The word has been synonymous with a variety of Linux tooling which when applied give the resemblance of what we expect a container to be.

Before LXC ( https://LinuxContainers.org ) and CNCF ( https://landscape.cncf.io/ ) and OCI ( https://opencontainers.org/ ), for shared-kernel VPS hosting ("virtual private server"; root on a shared box), there was OpenVZ (which requires a patched kernel and AFAIU still has features, like bursting, not present in cgroups).

Docker no longer has an LXC driver: libcontainer (opencontainers/runc) is the story now. The LXC docs have a great list of utilized kernel features that's also still true for docker-engine = runC + moby. The LXC docs: https://linuxcontainers.org/lxc/introduction/ :

> Current LXC uses the following kernel features to contain processes:

> ## Kernel namespaces (ipc, uts, mount, pid, network and user)

>> Namespaces are a feature of the Linux kernel that partitions kernel resources such that one set of processes sees one set of resources while another set of processes sees a different set of resources. https://en.wikipedia.org/wiki/Linux_namespaces

> ## Apparmor and SELinux profiles https://en.wikipedia.org/wiki/AppArmor / https://en.wikipedia.org/wiki/Security-Enhanced_Linux

udica is an interesting tool for creating SELinux policies for containers.

Is it possible for each container to run confined with a different SELinux label?

> ## Seccomp policies https://en.wikipedia.org/wiki/Seccomp

See below re: Seccomp.

> ## Chroots (using pivot_root) https://en.wikipedia.org/wiki/Chroot

Chroots and symlinks, Chroots and bind mounts, Chroots and overlay filesystems, Chroots and SELinux context labels.

FWIU, Chroots are a native feature of filesystem syscalls in Fuchsia.

> ## Kernel capabilities

https://wiki.archlinux.org/index.php/Capabilities :

>> "Capabilities (POSIX 1003.1e, capabilities(7)) provide fine-grained control over superuser permissions, allowing use of the root user to be avoided. Software developers are encouraged to replace uses of the powerful setuid attribute in a system binary with a more minimal set of capabilities. Many packages make use of capabilities, such as CAP_NET_RAW being used for the ping binary provided by iputils. This enables e.g. ping to be run by a normal user (as with the setuid method), while at the same time limiting the security consequences of a potential vulnerability in ping."

> ## CGroups (control groups)* https://en.wikipedia.org/wiki/Cgroups

Control groups enable per-process (and thus per-container) resource quotas. Other than limiting the impact of resource exhaustion, cgroups are not a security feature of the Linux kernel.

Here's a helpful explainer of the differences between some of these kernel features; which, combined, have become somewhat ubiquitous:

From "Formally add support for SELinux" (k3s #1372) https://github.com/rancher/k3s/issues/1372#issuecomment-5817... :

> https://blog.openshift.com/securing-kubernetes/

>> The main thing to understand about SELinux integration with OpenShift is that, by default, OpenShift runs each container as a random uid and is isolated with SELinux MCS labels. The easiest way of thinking about MCS labels is they are a dynamic way of getting SELinux separation without having to create policy files and run restorecon.

>> If you are wondering why we need SELinux and namespaces at the same time, the way I view it is namespaces provide the nice abstraction but are not designed from a security first perspective. SELinux is the brick wall that’s going to stop you if you manage to break out of (accidentally or on purpose) from the namespace abstraction.

>> CGroups is the remaining piece of the puzzle. Its primary purpose isn’t security, but I list it because it regulates that different containers stay within their allotted space for compute resources (cpu, memory, I/O). So without cgroups, you can’t be confident your application won’t be stomped on by another application on the same node.

From Wikipedia: https://en.wikipedia.org/wiki/Seccomp :

> seccomp (short for secure computing mode) is a computer security facility in the Linux kernel. seccomp allows a process to make a one-way transition into a "secure" state where it cannot make any system calls except exit(), sigreturn(), read() and write() to already-open file descriptors. Should it attempt any other system calls, the kernel will terminate the process with SIGKILL or SIGSYS.[1][2] In this sense, it does not virtualize the system's resources but isolates the process from them entirely.

... SELinux is one implementation of MAC (Mandatory Access Controls) that is built upon the LSM (Linux Security Modules) support in the Linux kernel. Some distros include policy sets for Docker hosts and lots of other packages that could be installed; see: "Formally add support for SELinux" (k3s #1372) https://github.com/rancher/k3s/issues/1372#issuecomment-5817...

[-]

How many people did it take to build the Great Pyramid?

> The potential energy of the pyramid—the energy needed to lift the mass above ground level—is simply the product of acceleration due to gravity, mass, and the center of mass, which in a pyramid is one-quarter of its height. The mass cannot be pinpointed because it depends on the specific densities of the Tura limestone and mortar that were used to build the structure; I am assuming a mean of 2.6 metric tons per cubic meter, hence a total mass of about 6.75 million metric tons. That means the pyramid’s potential energy is about 2.4 trillion joules.
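The quoted figure is easy to verify. A quick back-of-the-envelope check, assuming the Great Pyramid's original height of roughly 146.6 m (the mass and the H/4 center-of-mass rule come from the excerpt above):

```python
# Back-of-the-envelope check of the potential-energy figure quoted above.
g = 9.81            # m/s^2, acceleration due to gravity
mass = 6.75e9       # kg (6.75 million metric tons, from the excerpt)
height = 146.6      # m, the pyramid's assumed original height
center_of_mass = height / 4   # a pyramid's center of mass sits at H/4

potential_energy = mass * g * center_of_mass
print(f"{potential_energy:.2e} J")  # ~2.4e12 J, i.e. about 2.4 trillion joules
```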

In "Lost Technologies of the Great Pyramid" (2010) and "The Great Pyramid Prosperity Machine: Why the Great Pyramid was Built!" (2011), Steven Myers contends that the people who built the pyramids were master hydrologists who built a series of locks from the Nile all the way up the sides of the pyramids and pumped water up to a pool of water on the topmost level; where they used buoyancy and mechanical leverage by way of a floating barge crane in order to place blocks. This would explain how and why the pyramids are water tight, why explosive residue has been found in specific chambers, and why boats have been found buried at the bases of the pyramids.

https://www.amazon.com/dp/B0045Y26CC/

There are videos: http://www.thepump.org/video-series-2

https://www.youtube.com/playlist?list=PLt_DvKGJ_QLYvJ3IdVKXU...

I'm not aware of other explanations for how friction could have been overcome in setting the blocks such that they are watertight (in the later Egyptian pyramids).

AFAIU, the pyramids of South America appear to be of different - possibly older - construction methods.

[-]

Solar’s Future is Insanely Cheap

[+]
[+]

> if smart thermostats received price signals (maybe we should precool this house...) that would alleviate the evening ramp-up issue.

Is there an existing model for retail intraday rates? Would intraday rates be desirable for all market participants?

"Add area for curtailment data?" https://github.com/tmrowco/electricitymap-contrib/issues/236...

[-]

Demo of an OpenAI language model applied to code generation [video]

[+]
[+]
[+]
[+]
[+]

1. Generate test cases from function/class/method definitions.

2. Generate test cases from fuzz results.

3. Run tests and walk outward from symbols around relevant stacktrace frames (line numbers).

4. Mutate and run the test again.

...
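A toy sketch of steps 2-4, where `f` and its invariant are invented for illustration and "mutate" is read as shrinking a failing input toward a minimal counterexample:

```python
import random

def f(x):
    # Hypothetical function under test, with a deliberate defect for inputs >= 100.
    return 2 * x if x < 100 else 2 * x + 1

def prop(x):
    # Guessed invariant (step 1): f should be exactly linear with slope 2.
    return f(x) == 2 * x

def fuzz(trials=500, seed=7):
    # Step 2: collect failing inputs from random fuzzing.
    rng = random.Random(seed)
    return [x for x in (rng.randint(-1000, 1000) for _ in range(trials))
            if not prop(x)]

def shrink(x):
    # Steps 3-4: mutate a failing input downward and re-run until it passes.
    while x > 0 and not prop(x - 1):
        x -= 1
    return x

failures = fuzz()
minimal = shrink(min(failures))
print(minimal)  # smallest input that violates the property: 100
```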

Model-based Testing (MBT) https://en.wikipedia.org/wiki/Model-based_testing

> Models can also be constructed from completed systems

> At best this is like having an exceptionally smart autocomplete function that can look up code snippets on SO for you (provided those code snippets are no longer than one line).

Yeah, all it could do for you is autocomplete around what it thinks the specification might be at that point in time.

> But what if Andy gets another dinosaur, a mean one? -- Toy Story (1995)

[-]

Future of the human climate niche

How many degrees Celsius hotter would that be for billions of people in 50 years?

> The Paris Agreement's long-term temperature goal is to keep the increase in global average temperature to well below 2 °C above pre-industrial levels; and to pursue efforts to limit the increase to 1.5 °C, recognizing that this would substantially reduce the risks and impacts of climate change. This should be done by reducing emissions as soon as possible, in order to "achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases" in the second half of the 21st century. It also aims to increase the ability of parties to adapt to the adverse impacts of climate change, and make "finance flows consistent with a pathway towards low greenhouse gas emissions and climate-resilient development."

> Under the Paris Agreement, each country must determine, plan, and regularly report on the contribution that it undertakes to mitigate global warming. [6] No mechanism forces [7] a country to set a specific emissions target by a specific date, [8] but each target should go beyond previously set targets.

And then this is what was decided:

> In June 2017, U.S. President Donald Trump announced his intention to withdraw the United States from the agreement. Under the agreement, the earliest effective date of withdrawal for the U.S. is November 2020, shortly before the end of President Trump's 2016 term. In practice, changes in United States policy that are contrary to the Paris Agreement have already been put in place.[9]

https://en.wikipedia.org/wiki/Paris_Agreement

[+]

That's a good question.

[-]

Ask HN: Best resources for non-technical founders to understand hacker mindset?

Background: technical founder wondering what reading to recommend to a business focused founder for them to grok the hacker mindset. I've thought perhaps Mythical Man Month and How To Become A Hacker (Eric Raymond essay) but not sure they're quite right.

Any suggestions?

(In case it helps an analogue in the mathematical world might be A Mathematician's Apology or Gödel, Escher, Bach.)

[+]

> 3) True "hackers" value taking ownership in their work, that is, whatever they work on becomes an extension of themselves, much like an artist working on a work of art

There's something to be said about owning your work, but I have to disagree that unhealthy attachment to work products is a universal attribute of technical founder hackers. It's not a kid, it's a thing that was supposed to be the best use of the resources and information available at the time.

I must have confused this point with vanity and retention in projecting my own counterproductive anti-patterns.

Being prolific is not the objective for a true hacker; but a guy I know (not me) mentioned something about starting projects and seeing the next 5 years of potentially happily working on that project, too.

[-]

Dissecting the code responsible for the Bitcoin halving

> The difficulty of the calculations are determined by how many zeroes need to be at the front. [...]

The difficulty is actually not determined by the number of leading zeroes (as was initially the case).

https://en.bitcoinwiki.org/wiki/Difficulty_in_Mining :

> The Bitcoin network has a global block difficulty. Valid blocks must have a hash below this target. Mining pools also have a pool-specific share difficulty setting a lower limit for shares.

Comparing against a full target ("less than") instead of counting leading zeroes makes it possible to adjust the difficulty in finer increments at each retargeting.

Difficulty retargeting occurs every 2016 blocks (~2 weeks at the target rate of ~10 minutes per block), assuming the hashrate doesn't suddenly disappear; that would result in longer block times that could make it take months to reach 2016 blocks, according to "What would happen if 90% of the Bitcoin miners suddenly stopped mining?" https://bitcoin.stackexchange.com/questions/22308/what-would...

Difficulty is adjusted up or down (at each 2016-block retargeting) in order to keep the block time at ~10 minutes.
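The target comparison and the retargeting arithmetic can be sketched in a few lines (a simplification for illustration; the real consensus code works on the compact "bits" encoding of the target):

```python
BLOCKS_PER_RETARGET = 2016
TARGET_BLOCK_TIME = 10 * 60  # seconds

def block_is_valid(block_hash: int, target: int) -> bool:
    # "Less than" a 256-bit target, not "count leading zeroes"
    return block_hash < target

def retarget(old_target: int, actual_timespan_s: int) -> int:
    """Scale the target so 2016 blocks again take ~2 weeks."""
    expected = BLOCKS_PER_RETARGET * TARGET_BLOCK_TIME
    # Bitcoin clamps each adjustment to at most 4x in either direction
    actual_timespan_s = max(expected // 4, min(actual_timespan_s, expected * 4))
    return old_target * actual_timespan_s // expected
```

If the last 2016 blocks took half the expected time, the target halves (difficulty doubles); if miners disappear, the target can rise by at most 4x per retargeting period.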

The block reward halving occurs every ~4 years (210,000 blocks).

Relatedly, Moore's law observes/predicts that processing power (as measured by transistor count per chip) will double every 2 years while price stays the same. Is energy efficiency independent of transistor count? https://en.wikipedia.org/wiki/Moore%27s_law

Ask HN: Does mounting servers parallel with the temperature gradient trap heat?

Heat rises. Is heat trapped in the rack? Would mounting servers sideways (vertically) allow heat to transfer out of the rack?

Many systems have taken the vertical mount approach over the years: blade servers, routers, modems, and various gaming systems.

Horizontally-mounted: parallel with the floor

Vertically-mounted: perpendicular to the floor

[+]

Thermodynamics https://en.wikipedia.org/wiki/Thermodynamics

Are engine cylinders ever mounted horizontally? Why or why not?

> Heat rises.

Warmer air is less dense / more buoyant; so it floats.

"Does hot air really rise?" https://physics.stackexchange.com/questions/6329/does-hot-ai...

- Water ice floats because – somewhat uniquely – solid water is less dense than liquid water.

> Is heat trapped in the rack?

Probably.

> Would mounting servers sideways (vertically) allow heat to transfer out of the rack?

How could we find studies that have already tested this hypothesis?

[-]

Google ditched tipping feature for donating money to sites

> When asked, Google confirmed that the designs were an internal idea it explored last year but decided not to pursue as part of [Google Contributor] and Google Funding Choices, which lets sites ask visitors to disable ad blockers, or instead buy a subscription or pay a per page fee to remove ads.

Could this be built on Web Monetization API (ILP (Interledger Protocol)) and e.g. Google Pay as one of many possible payment/card/cryptocurrency processing backends; just like Coil is built on Web Monetization API?

[-]

Innovating on Web Monetization: Coil and Firefox Reality

Coil: $5/mo. Content creators get a proportional cut of that amount according to what is browsed with the browser extension enabled or with the Puma browser. Private: no tracking.

> Coil sends payments via the Interledger Protocol, which allows any currency to be used for sending and receiving.

https://github.com/coilhq

It looks like the Web Monetization API is not yet listed on the Website Monetization Wikipedia page: https://en.wikipedia.org/wiki/Website_monetization

Quoting from earlier this week:

> Web Monetization API (ILP: Interledger Protocol)

>> A JavaScript browser API which allows the creation of a payment stream from the user agent to the website.

>> Web Monetization is being proposed as a #W3C standard at the Web Platform Incubator Community Group.

> https://webmonetization.org/

> Interledger: Web Monetization API https://interledger.org/rfcs/0028-web-monetization/

Ask HN: Recommendations for online essay grading systems?

Which automated essay grading systems would you recommend? Are they open source?

How can we identify biases in these objective systems?

What are your experiences with these systems as authors and graders?

Who else remembers using the Flesch-Kincaid Grade Level metric in Word to evaluate school essays? https://en.wikipedia.org/wiki/Flesch%E2%80%93Kincaid_readabi...
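For reference, the Flesch–Kincaid grade level is just a linear function of average sentence length and average syllables per word; a sketch (the caller supplies the counts, since syllable counting is the hard part):

```python
def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    # Kincaid et al. (1975): 0.39 * ASL + 11.8 * ASW - 15.59
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# e.g. 100 words in 8 sentences with 130 syllables is roughly grade 4.6
grade = flesch_kincaid_grade(100, 8, 130)
```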

Imagine my surprise when I learned that this metric is not one that was created for authors to maximize: reading ease for the widest audience is not an objective in some departments, but a requirement.

What metrics do and should online essay grading systems present? As continuous feedback to authors, or as final judgement?

I'm reminded of a time in high school when an essay that I wrote was flagged as plagiarism by an automated essay verification engine. I certainly hadn't plagiarized, and it was up to me to demonstrate that each identified keyword-similar internet resource was not an uncited source of my paper. I disengaged. I later wrote an essay about how keyword search tools could be helpful to students doing original research. True story.

Decades later, I would guess that human review is still advisable.

This need of mine to have others validate my unpaid work has nothing to do with that traumatic experience.

I still harbor this belief in myself: that what I have to say is worth money to others, and that - someday - I'll pay a journal to consider my ScholarlyArticle for publishing in their prestigious publication with maybe even threaded peer review (and #StructuredPremises linking to Datasets and CreativeWorks that my #LinkedMetaAnalyses are predicated upon). Someday, I'll develop an online persona as a scholar, as a teacher, maybe someday as a TA or an associate professor and connect my CV to any or all of the social networks for academics. I'll work to minimize the costs of interviewing and searching public records. My research will be valued and funded.

Or maybe, like 20% time, I'll find time and money on the side for such worthwhile investigations; and what I produce will be of value to others: more than just an exercise in hearing myself speak.

In my years of internet communications, I've encountered quite a few patrons; lurkers; participants; and ne'er-do-wells who'll order 5 free waters, plaster their posters to the walls, harass paying customers, and just walk out like nothing's going to happen. Moderation costs time and money; and it's a dirty job that sometimes pays okay. There are various systems for grading these comments, these essays, these NewsArticles, these ScholarlyArticles. Human review is still advisable.

> How can we identify biases in these objective systems?

Modern "journalism" recognizes that it's not a one-way monologue but a dialogue: people want to comment. Ignorantly, helpfully, relevantly, insightfully, experiencedly. What separates the "article part" from the "comments part" of the dialogue? Typesetting, CSS, citations, quality of argumentation?

You could call it something like "Because I Want You To Grade My Essay Again" (BIWYGMEA) and just pay people who submit to it.

Ask HN: Systems for supporting Evidence-Based Policy?

What tools and services would you recommend for evidence-based policy tasks like meta-analysis, solution criteria development, and planned evaluations according to the given criteria?

Are they open source? Do they work with linked open data?

> Ask HN: Systems for supporting Evidence-Based Policy?

> What tools and services would you recommend for evidence-based policy tasks like meta-analysis, solution criteria development, and planned evaluations according to the given criteria?

> Are they open source? Do they work with linked open data?

I suppose I should clarify that citizens, consumers, voters, and journalists are not acceptable answers.

[-]

Facebook, Google to be forced to share ad revenue with Australian media

[+]
[+]
[+]

If you don't want them to index your content and send you free traffic, you can already specify that in your robots.txt; for free. https://en.wikipedia.org/wiki/Robots_exclusion_standard
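For example, a hypothetical robots.txt that opts a site out of Google News indexing only, while leaving other crawlers alone:

```
User-agent: Googlebot-News
Disallow: /

User-agent: *
Allow: /
```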

There are no ads on Google News.

There is an apparent glut of online news: supply exceeds demand and so the price has fallen.

[+]

By hurt, do you mean competed with by effectively utilizing technology to help people find information about the world from multiple sources?

There are very many news aggregators and most do serve ads next to the headlines they index. I assume that people typically link out from news aggregation sites more than into vertically-integrated services.

Perhaps the content producers / information service providers could develop additional revenue streams in order to subsidize a news aggregation public service. Micropayments (BAT, Web Monetization (ILP)), ads, paywalls, and public and private grants are sources of revenue for content producers.

I think it's disingenuous to blame news aggregation sites for the unprofitability of extremely redundant journalism. What happened to journalism? The internet. Excessive ads. Aren't we all writers these days?

Unfortunately, they killed the "most cited" and (was it?) "most in-depth" source analysis functions of Google News; and now we're stuck with regurgitated news wires and press releases, and all of these eyewitness mobile phone videos with two-bit banal commentary and punditry. How the world has changed.

So, as far as scientific experiments are concerned, it might be interesting to see what the impact of de-listing from free time sites X, Y, and Z is.

Do the papers in Australia and France now intend to compensate journal ScholarlyArticle authors whose work they summarize and hopefully at least cite the titles and URLs of, or the journals themselves?

[-]

France rules Google must pay news firms for content

us0r | 2020-04-11 12:36:55 | 134

Website monetization https://en.wikipedia.org/wiki/Website_monetization

Web Monetization API (ILP: Interledger Protocol)

> A JavaScript browser API which allows the creation of a payment stream from the user agent to the website.

> Web Monetization is being proposed as a #W3C standard at the Web Platform Incubator Community Group.

https://webmonetization.org/

Interledger: Web Monetization API https://interledger.org/rfcs/0028-web-monetization/
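For context, under the Web Monetization proposal a site declares a payment pointer in a meta tag, and a monetization-capable browser or extension streams micropayments to it over ILP (the payment pointer below is a placeholder):

```html
<meta name="monetization" content="$wallet.example.com/alice">
```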

Khan Academy, for example, accepts BAT (Basic Attention Token) micropayments/microdonations that e.g. Brave browser users can opt to share with the content producers and indexers. https://en.wikipedia.org/wiki/Brave_(web_browser)#Basic_Atte...

Web Monetization w/ Interledger should enable any payments system with low enough transaction costs ("ledger-agnostic, currency agnostic") to be used to pay/tip/donate to content producers who are producing unsensational, unbiased content that people want to pay for.

Paywalls/subscriptions and ads are two other approaches to funding quality journalism.

Should journalists pay ScholarlyArticle authors whose studies they publish summaries of without even citing the DOI/URL and Title; or the journals said ScholarlyArticles are published in? https://schema.org/ScholarlyArticle

[-]

Adafruit Thermal Camera Imager for Fever Screening

> Thermal Camera Imager for Fever Screening with USB Video Output - UTi165K. PRODUCT ID: 4579 https://www.adafruit.com/product/4579

> This video camera takes photos of temperatures! This camera is specifically tuned to work in the 30˚C~45˚C / 86˚F~113˚ F range with 0.5˚C / 1˚ F accuracy, so it's excellent for human temperature & fever detection. In fact, this thermal camera is often used by companies/airports/hotels/malls to do a first-pass fever check: If any person has a temperature of over 99˚F an alarm goes off so you can do a secondary check with an accurate handheld temperature meter.

> You may have seen thermal 'FLIR' cameras used to find air leaks in homes, but those cameras have a very wide temperature range, so they're not as accurate in the narrow range used for fever-scanning. This camera is designed specifically for that purpose!

... USB Type-C, SD Card; no price listed yet?

[-]

Ask HN: What's the ROI of Y Combinator investments?

To calculate the ROI of YC investments, we could find the terms of the YC investments (x for y%, preference) and find the exit rate (what % of companies exit).

We could search for 'ROI of ycombinator investments' and find valuation numbers from a number of years ago.

From the first page of search results, we'd then learn about "return on capital" and how the standard YC seed terms have changed over the years.

Return on capital: https://en.wikipedia.org/wiki/Return_on_capital

From the See also section of this Wikipedia page, we might discover "Cash flow return on investment" and "Rate of return on a portfolio"

From the "rate of return" Wikipedia page, we might learn that "The return on investment (ROI) is return per dollar invested. It is a measure of investment performance, as opposed to size (c.f. return on equity, return on assets, return on capital employed)." and that "The annualized return of an investment depends on whether or not the return, including interest and dividends, from one period is reinvested in the next period. " https://en.wikipedia.org/wiki/Rate_of_return
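In its simplest single-period form, return per dollar invested is just this (ignoring reinvestment, dilution, and liquidation preferences, which is part of why a single ROI number says so little about a fund):

```python
def roi(proceeds: float, invested: float) -> float:
    """Return on investment, as a fraction of the amount invested."""
    return (proceeds - invested) / invested

# e.g. a $120k investment returned as $150k is a 25% ROI
assert roi(150_000, 120_000) == 0.25
```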

From the YCombinator Wikipedia page, we might read that "The combined valuation of the top YC companies was over $155 billion as of October, 2019. [4]" and that "As of late 2019, Y Combinator had invested in >2,000 companies [37], most of which are for-profit. Non-profit organizations can also participate in the main YC program. [38]" and then read about "seed accelerators" and then "business incubators" in search of appropriate metrics for comparing VC performance. https://en.wikipedia.org/wiki/Y_Combinator

ROI is such a frou-frou statistic anyway. What does that even mean, ROI? In any case, YC itself is not a public company, AFAICT, so it's not as easy as going to https://YCharts.com, entering the equity symbol, clicking on "Key Stats", and scrolling down to "Profitability" to review [Gross | EBITDA | Operating] Profit Margin.

The LTSE (Long-Term Stock Exchange) is where people who are in this for real are really doing it now.

[-]

Microsoft announces Money in Excel powered by Plaid

This looks really useful.

At first glance, I found a number of ways to push transaction data into Google Sheets from the Plaid API:

build-your-own-mint (NodeJS, CircleCI) https://github.com/yyx990803/build-your-own-mint

go-plaid: https://github.com/ebcrowder/go-plaid

Presumably, like the GOOGLEFINANCE function, there's some way to pull data from an API with just Apps Script (~JS) without an auxiliary serverless function to get the txs from Plaid and post to the gsheets API?

[-]

Lora-based device-to-device smartphone communication for crisis scenarios [pdf]

[+]
[+]

Unfortunately, the Earl tablet never made it to market: https://blog.the-ebook-reader.com/2015/01/26/video-update-ab...

Earl specs: Waterproof; Solar charging; eInk; ANT+; NFC; VHF/UHF transceiver (GMRS, PMR446, UHFCB); GPS; Sensors: Accelerometer, Gyroscope, Magnetometer, Temperature, Barometer, Humidity; AM/FM/SW/LW/WB

LTE, LoRa, 5G, and Hostapd would be great

Being able to plug it into a powerbank and antennas for use as a fixed or portable e.g. BATMAN mesh relay would be great

"LoRa+WiFi ClusterDuck Protocol by Project OWL for Disaster Relief" https://news.ycombinator.com/item?id=22707267

> An opkg (for e.g. OpenWRT) with this mesh software would make it possible to use WiFi/LTE routers with a LoRa transmitter/receiver connected over e.g. USB or Mini-PCIe.

LoRa+WiFi ClusterDuck Protocol by Project OWL for Disaster Relief

> Project OWL (Organization, Whereabouts, and Logistics) creates a mesh network of Internet of Things (IoT) devices called DuckLinks. These Wi-Fi-enabled devices can be deployed or activated in disaster areas to quickly re-establish connectivity and improve communication between first responders and civilians in need.

> In OWL, a central portal connects to solar- and battery-powered, water-resistant DuckLinks. These create a Local Area Network (LAN). In turn, these power up a Wi-Fi captive portal using low-frequency Long-range Radio (LoRa) for Internet connectivity. LoRA has a greater range, about 10km, than cellular networks.

...

> You don't actually need a DuckLink device. The open-source OWL firmware can quickly turn a cheap wireless device into a DuckLink using the -- I swear I'm not making this up -- ClusterDuck Protocol. This is a mesh network node, which can hook up to any other near-by Ducks.

> OWL is more than just hardware and firmware. It's also a cloud-based analytic program. The OWL Data Management Software can be used to facilitate organization, whereabouts, and logistics for disaster response.

Homepage: http://clusterduckprotocol.org/

GitHub: https://github.com/Code-and-Response/ClusterDuck-Protocol

The Linux Foundation > Code and Response https://www.linuxfoundation.org/projects/code-and-response/

GitHub: https://github.com/code-and-response

An opkg (for e.g. OpenWRT) with this mesh software would make it possible to use WiFi/LTE routers with a LoRa transmitter/receiver connected over e.g. USB or Mini-PCIe.

... cc'ing from https://twitter.com/westurner/status/1238859774567026688 :

OpenWRT is a Make-based embedded Linux distro w/ the LuCI (Lua + JSON + UCI) web interface.

#OpenWRT runs on Raspberry Pis and on ARM, x86, and MIPS; there's a Docker image. OpenWRT Supported Devices: https://openwrt.org/supported_devices

OpenWRT uses opkg packages: https://openwrt.org/docs/guide-user/additional-software/opkg

I searched for "Lora" in OpenWRT/packages: lora-gateway-hal opkg package: https://github.com/openwrt/packages/blob/master/net/lora-gat...

lora-packet-forwarder opkg package (w/ UCI integration): https://github.com/openwrt/packages/pull/8320

https://github.com/xueliu/lora-feed :

> Semtech packages and ChirpStack [(LoRaserver)] Network Server stack for OpenWRT

> > [In addition to providing node2node/2net connectivity, #batman-adv can bridge VLANs over a mesh (or link), such as for “trusted” client, guest, IoT, and mgmt networks. It provides an easy-to-configure alternative to other approaches to “backhaul”, […]] https://openwrt.org/docs/guide-user/network/wifi/mesh/batman

> I have a few different [quad-core, MIMO] ARM devices without 4G. TIL that the @GLiNetWifi devices ship with OpenWRT firmware (and a mobile config app) and some have 1-2 (Mini-PCIe) 4G w/ SIM slots. Also, @turris_cz has OpenWRT w/ LXC in the kernel build. https://t.co/Rz0Uu5uHJQ

[-]

A Visual Debugger for Jupyter

[+]

So, I went looking for the answer to this because in the past I've installed the scratchpad extension by installing jupyter_contrib_nbextensions, but those don't work with JupyterLab because there's a new extension model for JupyterLab that requires node and npm.

Turns out that with JupyterLab, all you have to do is right-click and select "New Console for Notebook" and it opens a console pane below the notebook, already attached to the notebook kernel. You can also instead do File > New > Console and select a kernel listed under "Use Kernel From Other Session".

The "New action runInConsole to allow line by line execution of cell content" PR adds a notebook command, `notebook:run-in-console`, but you have to add an associated keyboard shortcut (e.g. `Ctrl Shift Enter` or `Ctrl-G`) that calls `notebook:run-in-console` to your config yourself: https://github.com/jupyterlab/jupyterlab/pull/4330

"In Jupyter Lab, execute editor code in Python console" describes how to add the associated keyboard shortcut to your config: https://stackoverflow.com/questions/38648286/in-jupyter-lab-...

[-]

Ask HN: What's the Equivalent of 'Hello, World' for a Quantum Computer?

The 'Hello, World' program is one of the simplest programs to demonstrate how to go about writing a program in a new programming language.

What is an equivalent simple program which demonstrates how to write a very simple program for a quantum computer?

I have tried (and failed) to imagine such a program. Can somebody who has actually used a quantum computer show us an actual quantum computer program?
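The closest thing to a consensus quantum 'Hello, World' is preparing a superposition with a Hadamard gate and measuring it. Here is a minimal classical simulation in NumPy (not an actual quantum program; frameworks like Qiskit or Cirq express the same one-qubit circuit for real hardware):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                   # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

psi = H @ ket0                                # (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2                      # Born rule -> [0.5, 0.5]

rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=1000, p=probs)  # simulated measurements
```

Run on real hardware, the measurement statistics would look the same: roughly half zeroes and half ones.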

Ask HN: Communication platforms for intermittent disaster relief?

Are there good platforms for disaster relief that work well with intermittent connectivity (i.e. spotty 3G/4G/WiFi/LoRa)?

How can major networks improve in terms of e.g. indicating message delivery status, most recent sync time, sync throttling status due to load, optionally downloading images/audio/video, referring people to local places and/or forms for help with basic needs, etc?

What are some tools that app developers can use to simulate intermittent connectivity when running tests?
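On the test-tooling question, one common approach is a "flaky transport" test double (a sketch with made-up names) that drops a fraction of sends so app-level retry/queue logic can be exercised reproducibly:

```python
import random

class FlakyTransport:
    """Test double that raises on a fraction of sends to simulate spotty links."""

    def __init__(self, send, drop_rate: float = 0.3, seed: int = 0):
        self._send = send
        self._drop_rate = drop_rate
        self._rng = random.Random(seed)  # seeded: failures are reproducible

    def send(self, message):
        if self._rng.random() < self._drop_rate:
            raise ConnectionError("simulated drop")
        return self._send(message)

delivered = []
transport = FlakyTransport(delivered.append, drop_rate=0.5, seed=42)
for i in range(10):
    try:
        transport.send(i)
    except ConnectionError:
        pass  # the app's retry/queue-and-resync logic would go here
```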

How can people find local, legitimate sources for information if they're not already following local disaster relief authorities?

DroneAid: A Symbol Language and ML model for indicating needs to drones, planes

From the README https://github.com/Code-and-Response/DroneAid :

> The DroneAid Symbol Language provides a way for those affected by natural disasters to express their needs and make them visible to drones, planes, and satellites when traditional communications are not available.

> Victims can use a pre-packaged symbol kit that has been manufactured and distributed to them, or recreate the symbols manually with whatever materials they have available.

> These symbols include those below, which represent a subset of the icons provided by The United Nations Office for the Coordination of Humanitarian Affairs (OCHA). These can be complemented with numbers to quantify need, such as the number of people who need water.

Each of the symbols are drawn within a triangle pointing up:

- Immediate Help Needed (orange; downward triangle \n SOS),

- Shelter Needed (cyan; like a guy standing in a tall pentagon without a floor),

- OK: No Help Needed (green; upward triangle \n OK),

- First Aid Kit Needed (yellow; briefcase with a first aid cross),

- Water Needed (blue; rain droplet),

- Area with Children in Need (lilac; baby-looking thing with a diaper on),

- Food Needed (red; pan with wheat drawn above it),

- Area with Elderly in Need (purple; person with a cane)

So, we're going to need some artists; something to write large things with; some orange, cyan, green, yellow, blue, lilac, red, and purple things; some people who can tell me the difference between lilac (light purple: babies) and purple (darker purple: old people); and some drones that can capture location and imagery.

Note that DroneAid is also a project of The Linux Foundation Code and Response organization.

[-]

Ask HN: Computer Science/History Books?

Hi guys, can you recommend interesting books on Computer Science or computer history (similar to Dealers of Lightning) to read in these quarantine times? I really like that subject and am looking for something to keep myself away from the TV at night.

Thank you.

[+]

"The Information: A History, a Theory, a Flood" starts with "1 | Drums That Talk" re: African drum messaging; a complex coding scheme:

> Here was a messaging system that outpaced the best couriers, the fastest horses on good roads with way stations and relays.

https://en.wikipedia.org/wiki/The_Information:_A_History,_a_...

https://www.goodreads.com/book/show/8701960-the-information

From "Polynesian People Used Binary Numbers 600 Years Ago" https://www.scientificamerican.com/article/polynesian-people... :

> Binary arithmetic, the basis of virtually all digital computation today, is usually said to have been invented at the start of the eighteenth century by the German mathematician Gottfried Leibniz. But a study now shows that a kind of binary system was already in use 300 years earlier among the people of the tiny Pacific island of Mangareva in French Polynesia.

[-]

Open-source security tools for cloud and container applications

[+]

> List of CNCF open source security projects without the blog post: https://landscape.cncf.io/category=security-compliance&forma...

Thanks for this.

[-]

YC Companies Responding to Covid-19

Are life sciences and healthcare familiar verticals for YC?

Good to see money and talent going to such good use.

(Edit) Here's the YC Companies list; which doesn't yet list these new investments:

Biomedical vertical: https://www.ycombinator.com/companies/?vertical=Biomedical

Healthcare vertical: https://www.ycombinator.com/companies/?vertical=Healthcare

"The Y Combinator Database" https://www.ycdb.co/

[-]

Show HN: Neh – Execute any script or program from Nginx location directives

[+]
[+]
[+]

Nginx probably somewhat-deliberately has FastCGI but not regular CGI for a number of reasons.

CGI has process-per-request overhead.

CGI typically runs processes as the user the webserver is running as; said processes generally run unsandboxed with the webserver's privileges, so they can read its secrets (such as X.509 private keys).

Just about any app can be (D)DOS'd. That requires less resources with the process-per-request overhead of CGI.

In order to prevent resource exhaustion due to, e.g., someone benignly hitting reload a bunch of times and thus creating multiple GET requests, applications should enqueue task messages that a limited number of workers retrieve from a (durable) FIFO or priority queue and update the status of.
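A minimal sketch of that pattern with the standard library (illustrative names; a durable broker would replace queue.Queue in production):

```python
import queue
import threading

tasks = queue.Queue()
results = {}

def worker() -> None:
    while True:
        task_id = tasks.get()
        if task_id is None:          # sentinel: shut this worker down
            tasks.task_done()
            return
        results[task_id] = "done"    # stand-in for the real work
        tasks.task_done()

# A fixed-size worker pool, regardless of how many requests arrive:
workers = [threading.Thread(target=worker) for _ in range(2)]
for w in workers:
    w.start()

# A GET handler would enqueue a task message and return 202 Accepted:
for task_id in ("req-1", "req-2", "req-3"):
    tasks.put(task_id)

tasks.join()                         # all enqueued work is finished
for _ in workers:
    tasks.put(None)
for w in workers:
    w.join()
```

N reloads then cost N queue entries, not N concurrent processes.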

Websockets may or may not scale better than long-polling for streaming stdout to a client.

[+]
[-]

Ask HN: How can a intermediate-beginner learn Unix/Linux and programming?

For a long time, I’ve been in an awkward position with my knowledge of computers. I know basic JavaScript (syntax and booleans and nothing more). I’ve learned the bare basics of Linux from setting up a Pi-hole. I understand the concept of secure hashes. I even know some regex.

The problem is, I know so little that I can’t actually do anything with this knowledge. I suppose I’m looking for a tutorial that will teach me to be comfortable with the command line and a Unix environment, while also teaching me to code a language. Where should I start?

[+]

> Also check GitHub for a bunch of repos that contain boilerplate code that is used in most daemons, illustrating signal handling, forking, etc. [2]

docker-systemctl-replacement is a (partial) reimplementation of systemd as one python script that can be run as the init process of a container that's helpful for understanding how systemd handles processes: https://github.com/gdraheim/docker-systemctl-replacement/blo...

systemd is written in C: https://github.com/systemd/systemd

> examples of how to write secure code

awesome-safety-critical > Coding Guidelines https://github.com/stanislaw/awesome-safety-critical/blob/ma...

[-]

Math Symbols Explained with Python

Average of a finite series: there's a statistics module in Python 3.4+ (statistics.fmean, a faster float version, was added in 3.8):

  X = [1, 2, 3]

  from statistics import mean
  mean(X)

  # may or may not be preferable to
  sum(X) / len(X)
https://docs.python.org/3/library/statistics.html#statistics...

Product of a terminating iterable:

  import operator
  from functools import reduce
  reduce(operator.mul, X)

  # or, in Python 3.8+:
  # from math import prod
  # prod(X)
Vector norm:

  from numpy import linalg as LA
  LA.norm(X)
https://docs.scipy.org/doc/numpy/reference/generated/numpy.l...

Function domains and ranges can be specified and checked statically with type annotations (e.g. by mypy), at runtime with type()/isinstance(), or with something like pycontracts or icontract for checking preconditions and postconditions.
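As a small illustration (hypothetical function, not from the article): the annotation documents the domain for a static checker, while an isinstance() guard enforces it at runtime:

```python
from typing import Sequence

def vector_norm(x: Sequence[float]) -> float:
    """Euclidean norm; the annotation states the domain for static checkers."""
    if not all(isinstance(v, (int, float)) for v in x):  # runtime domain check
        raise TypeError("x must be a sequence of real numbers")
    return sum(v * v for v in x) ** 0.5
```

vector_norm([3, 4]) returns 5.0, while vector_norm(["a"]) raises TypeError instead of failing somewhere deeper.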

Dot product:

  import numpy as np
  Y = [4, 5, 6]
  np.dot(X, Y)
https://docs.scipy.org/doc/numpy/reference/generated/numpy.d...

Unit vector:

  import numpy as np
  np.asarray(X) / np.linalg.norm(X)

[-]

Ask HN: Is there a way you can convert a smartphone to a no-contact thermometer?

Wondering: is there an infrared dongle that can convert your phone into a no-contact thermometer to read body temperature?

Infrared thermometer: https://en.wikipedia.org/wiki/Infrared_thermometer

Thermography: https://en.wikipedia.org/wiki/Thermography
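As background on the principle: an IR sensor measures radiated power and inverts the Stefan–Boltzmann law, P = ε σ A T⁴ (a toy sketch; real devices calibrate over a narrow IR band rather than total power):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def brightness_temperature(power_w: float, area_m2: float,
                           emissivity: float = 0.98) -> float:
    # Invert P = emissivity * SIGMA * area * T**4; human skin emissivity ~0.98
    return (power_w / (emissivity * SIGMA * area_m2)) ** 0.25
```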

IDK what the standard error is for medical temperature estimation with an e.g. FLIR ONE thermal imaging camera for an Android/iOS device. https://www.flir.com/applications/home-outdoor/

I'd imagine that sanitization would be crucial for any clinical setting.

(Edit) "Prediction of brain tissue temperature using near-infrared spectroscopy" (2017) Neurophotonics https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5469395/

"Nirs body temperature" https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=nir...

"Infrared body temperature" https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=inf...

"Infrared thermometer iOS" https://m.alibaba.com/trade/search?SearchText=infrared%20the...

"Infrared thermometer Android" https://alibaba.com/trade/search?SearchText=infrared%20therm...

[-]

Employee Scheduling

From "Ask HN: What algorithms should I research to code a conference scheduling app" https://news.ycombinator.com/item?id=15267804 :

> Resource scheduling, CSP (Constraint Satisfaction programming)

CSP: https://en.wikipedia.org/wiki/Constraint_satisfaction_proble...

Scheduling (production processes):

https://en.wikipedia.org/wiki/Scheduling_(production_process...

Scheduling (computing):

https://en.wikipedia.org/wiki/Scheduling_(computing)

... To an OS, a process thread has a priority and sometimes a CPU affinity.

From http://markmail.org/search/?q=list%3Aorg.python.omaha+pysche... :

Pyschedule:

- Src: https://github.com/timnon/pyschedule

From https://github.com/timnon/pyschedule :

> pyschedule is python package to compute resource-constrained task schedules. Some features are:

- precedence relations: e.g. task A should be done before task B

- resource requirements: e.g. task A can be done by resource X or Y

- resource capacities: e.g. resource X can only process a few tasks

Previous use-cases include:

- school timetables: assign teachers to classes

- beer brewing: assign equipment to brewing stages

- sport schedules: assign stadiums to games
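In the same spirit, here is a toy backtracking CSP for a conference-style use case (assign talks to room/slot pairs); every name and constraint is invented for illustration, and a real tool like pyschedule delegates to MIP/CP solvers rather than naive backtracking:

```python
from itertools import product

talks = {"A": "alice", "B": "bob", "C": "alice"}   # talk -> speaker
rooms, slots = ["r1", "r2"], [0, 1]

def consistent(assignment, talk, cell):
    """Constraints: one talk per (room, slot); a speaker can't be in two rooms at once."""
    room, slot = cell
    for other, (r, s) in assignment.items():
        if (r, s) == cell:
            return False
        if s == slot and talks[other] == talks[talk]:
            return False
    return True

def solve(assignment=None):
    """Depth-first backtracking search over (room, slot) cells."""
    assignment = assignment or {}
    if len(assignment) == len(talks):
        return assignment
    talk = next(t for t in talks if t not in assignment)
    for cell in product(rooms, slots):
        if consistent(assignment, talk, cell):
            assignment[talk] = cell
            result = solve(assignment)
            if result:
                return result
            del assignment[talk]   # backtrack
    return None
```

The same shape scales down from timetables to brewing stages: variables, finite domains, and a consistency predicate.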

... https://en.wikipedia.org/wiki/Slurm_Workload_Manager :

> Slurm is the workload manager on about 60% of the TOP500 supercomputers.[1]

Slurm uses a best fit algorithm based on Hilbert curve scheduling or fat tree network topology in order to optimize locality of task assignments on parallel computers.[2]

... https://en.wikipedia.org/wiki/Hilbert_curve_scheduling :

> [...] the Hilbert curve scheduling method turns a multidimensional task allocation problem into a one-dimensional space filling problem using Hilbert curves, assigning related tasks to locations with higher levels of proximity.[1] Other space filling curves may also be used in various computing applications for similar purposes.[2]
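The curve mapping itself is compact. A sketch using the standard iterative d2xy formulation (adapted from the well-known algorithm; variable names are conventional, not from any scheduler's source):

```python
def d2xy(n, d):
    """Map distance d along the Hilbert curve to (x, y) on an n x n grid.

    Tasks given consecutive 1-D ranks then land on physically nearby
    grid cells, which is the locality property schedulers exploit.
    """
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                    # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Ranks 0..3 trace adjacent cells of a 2x2 grid: (0,0), (0,1), (1,1), (1,0)
```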

[-]

Show HN: Simulation-based high school physics course notes

[+]
[-]

WebAssembly brings extensibility to network proxies

FWIW, Ethereum WASM (ewasm) has a cost (in "particles" ("gas")) for each WebAssembly opcode. [1]

Opcode costs help to incentivize efficient code.

ewasm/design /README.md [2] links to the complete WebAssembly instruction set. [3]

[1] https://ewasm.readthedocs.io/en/mkdocs/determining_wasm_gas_...

[2] https://github.com/ewasm/design/blob/master/README.md

[3] https://webassembly.github.io/spec/core/appendix/index-instr...
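As a sketch of the idea only (the opcode names and costs below are invented for illustration, not the ewasm cost table): a metered interpreter deducts a per-opcode cost and aborts when the budget is exhausted, so wasteful code literally costs more.

```python
# Hypothetical per-opcode cost table for a tiny stack machine
GAS_COST = {"push": 1, "add": 2, "mul": 4}

class OutOfGas(Exception):
    pass

def run(program, gas_limit):
    """Execute (opcode, *args) tuples, charging gas per instruction."""
    stack, gas = [], gas_limit
    for op, *args in program:
        gas -= GAS_COST[op]
        if gas < 0:
            raise OutOfGas(op)
        if op == "push":
            stack.append(args[0])
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "mul":
            stack.append(stack.pop() * stack.pop())
    return stack[-1], gas_limit - gas   # result, gas consumed
```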

[-]

Pandemic Ventilator Project

mhb | 2020-03-14 00:29:09 | 318 | # | ^
[+]

> https://www.projectopenair.org/

From https://app.jogl.io/project/121#about

>> Current Status of the project

>> The main bottleneck currently (2020-03-13) is organization / management.

>> […] This is an organization of experts and hobbyists from around the globe.

[+]

I see a "Ventilator Project" heading?

(edit) here's the link to their 'Ventilator' document: https://docs.google.com/document/d/1RDihfZIOEYs60kPEIVDe7gms...

[-]

Low-cost ventilator wins Sloan health care prize (2019)

[+]

Ventilator availability is limiting our ability to get care to as many people as possible.

[+]

Robots, then. Robots are the future!

[-]

AI can detect coronavirus from CT scans in twenty seconds

Is it possible to detect coronavirus with NIRS (Near-Infrared Spectroscopy)? https://en.wikipedia.org/wiki/Near-infrared_spectroscopy

FWIU, the equipment costs and scan times are lower with NIRS than with CT or MRI? And infrared is zero rads?

(Edit) I think it was this or the TED video that had the sweet demo: "The Science of Visible Thought & Our Translucent Selves | Mary Lou Jepsen | SU Global Summit" https://youtu.be/IRCXNBzfeC4

Are these devices in production?

[-]

AutoML-Zero: Evolving machine learning algorithms from scratch

[+]
[+]

> Would be funny but most of those things are already on AutoML Tables, including the carbon offset

GCP datacenters are 100% offset with PPAs. Are you referring to different functionality for costing AutoML instructions in terms of carbon?

...

I'd add:

- Setup a Jupyter Notebook environment

> Jupyter Notebooks are one of the most popular development tools for data scientists. They enable you to create interactive, shareable notebooks with code snippets and markdown for explanations. Without leaving Google Cloud's hosted notebook environment, AI Platform Notebooks, you can leverage the power of AutoML technology.

> There are several benefits of using AutoML technology from a notebook. Each step and setting can be codified so that it runs the same every time by everyone. Also, it's common, even with AutoML, to need to manipulate the source data before training the model with it. By using a notebook, you can use common tools like pandas and numpy to preprocess the data in the same workflow. Finally, you have the option of creating a model with another framework, and ensemble that together with the AutoML model, for potentially better results.

https://cloud.google.com/blog/products/ai-machine-learning/u...

[+]

> This sounds like the sort of thing that would be useful outside of data science.

The instruction/operation costing or the computational essay/notebook environment setup?

Ethereum ("gas") and EOS have per-instruction costing. SingularityNET is a marketplace for AI solutions hosted on a blockchain, where you pay for AI/ML services with the SingularityNET AGI token. E.g. GridCoin and CureCoin compensate compute resource donations with their own tokens; which also have a floating exchange rate.

TLJH: "The Littlest JupyterHub" describes how to setup multi-user JupyterHub with e.g. Docker spawners that isolate workloads running with shared resources like GPUs and TPUs: http://tljh.jupyter.org/en/latest/

"Zero to BinderHub" describes how to setup BinderHub on a k8s cluster: https://binderhub.readthedocs.io/en/latest/zero-to-binderhub...

[+]

REES is one solution to reproducibility of the computational environment.

> BinderHub ( https://mybinder.org/ ) creates docker containers from {git repos, Zenodo, FigShare,} and launches them in free cloud instances also running JupyterLab by building containers with repo2docker (with REES (Reproducible Execution Environment Specification)). This means that all I have to do is add an environment.yml to my git repo in order to get Binder support so that people can just click on the badge in the README to launch JupyterLab with all of the dependencies installed.

> REES supports a number of dependency specifications: requirements.txt, Pipfile.lock, environment.yml, aptSources, postBuild. With an environment.yml, I can install the necessary CPython/PyPy version and everything else.

REES: https://repo2docker.readthedocs.io/en/latest/specification.h...

REES configuration files: https://repo2docker.readthedocs.io/en/latest/config_files.ht...

Storing a container built with repo2docker in a container registry is one way to increase the likelihood that it'll be possible to run the same analysis pipeline with the same data and get the same results years later.
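For reference, a minimal REES-recognized environment.yml might look like this (the package names and pins are illustrative, not from any particular repo):

```yaml
# environment.yml -- a conda spec that repo2docker / BinderHub recognize;
# the built container carries exactly these pinned dependencies.
name: my-analysis
channels:
  - conda-forge
dependencies:
  - python=3.8
  - numpy=1.18.1
  - pandas=1.0.1
  - pip
  - pip:
      - nbgrader==0.6.1
```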

...

Pachyderm ( https://pachyderm.io/platform/ ) does Data Versioning, Data Pipelines (with commands that each run in a container), and Data Lineage (~ "data provenance"). What other platforms are there for versioning data and recording data provenance?

...

Recording manual procedures is an area where we've somewhat departed from the "write in a lab notebook with a pen" practice. CoCalc records all (collaborative) inputs to the notebook with a timeslider for review.

In practice, people use notebooks for displaying generated charts, manual exploratory analyses (which does introduce bias), for demonstrating APIs, and for teaching.

Is JupyterLab an ideal IDE? Nope, not by a longshot. nbdev makes it easier to write a function in a notebook, sync it to a module, edit it with a more complete data-science IDE (like RStudio, VSCode, Spyder, etc), and then copy it back into the notebook. https://github.com/fastai/nbdev

> What other platforms are there for versioning data and recording data provenance?

Quilt also versions data and data pipelines: https://medium.com/pytorch/how-to-iterate-faster-in-machine-...

https://github.com/quiltdata/quilt (Python)

[+]
[+]

Is this an argument in favor of unjustified, arbitrary magic-constant priors?

[+]

Yeah, but cryptographic hashes have some entropy.

[+]

Is the question "Does AutoML-Zero minimize or maximize a cost function with error as a primary component, instead of using a binary win/lose classifier like AlphaGoZero?"

https://en.wikipedia.org/wiki/AlphaZero

[-]

Options for giving math talks and lectures online

One option: screencast development of a Jupyter notebook.

Jupyter Notebook supports LaTeX (MathTeX) and inline charts. You can create graded notebooks with nbgrader and/or with CoCalc (which records all (optionally multi-user) input such that you can replay it with a time slider).

Jupyter notebooks can be saved to HTML slides with reveal.js, but if you want to execute code cells within a slide, you'll need to install RISE: https://rise.readthedocs.io/en/stable/

Here are the docs for CoCalc Course Management; Handouts, Assignments, nbgrader: https://doc.cocalc.com/teaching-course-management.html

Here are the docs for nbgrader: https://nbgrader.readthedocs.io/en/stable/

You can also grade Jupyter notebooks in Open edX:

> Auto-grade a student assignment created as a Jupyter notebook, using the nbgrader Jupyter extension, and write the score in the Open edX gradebook

https://github.com/ibleducation/jupyter-edx-grader-xblock

Or just show the Jupyter notebook within an edX course: https://github.com/ibleducation/jupyter-edx-viewer-xblock

There are also ways to integrate Jupyter notebooks with various LMS / LRS systems (like Canvas, Blackboard, etc) "nbgrader and LMS / LRS; LTI, xAPI" on the "Teaching with Jupyter Notebooks" mailing list: https://groups.google.com/forum/#!topic/jupyter-education/_U...

"Teaching and Learning with Jupyter" ("An open book about Jupyter and its use in teaching and learning.") https://jupyter4edu.github.io/jupyter-edu-book/

> TLJH: "The Littlest JupyterHub" describes how to setup multi-user JupyterHub with e.g. Docker spawners that isolate workloads running with shared resources like GPUs and TPUs: http://tljh.jupyter.org/en/latest/

> "Zero to BinderHub" describes how to setup BinderHub on a k8s cluster: https://binderhub.readthedocs.io/en/latest/zero-to-binderhub...

If you create a git repository with REES-compatible dependency specification file(s), students can generate a container with all of the same software at home with repo2docker or with BinderHub.

> REES is one solution to reproducibility of the computational environment.

>> BinderHub ( https://mybinder.org/ ) creates docker containers from {git repos, Zenodo, FigShare,} and launches them in free cloud instances also running JupyterLab by building containers with repo2docker (with REES (Reproducible Execution Environment Specification)). This means that all I have to do is add an environment.yml to my git repo in order to get Binder support so that people can just click on the badge in the README to launch JupyterLab with all of the dependencies installed.

>> REES supports a number of dependency specifications: requirements.txt, Pipfile.lock, environment.yml, aptSources, postBuild. With an environment.yml, I can install the necessary CPython/PyPy version and everything else.

> REES: https://repo2docker.readthedocs.io/en/latest/specification.h...

> REES configuration files: https://repo2docker.readthedocs.io/en/latest/config_files.ht...

> Storing a container built with repo2docker in a container registry is one way to increase the likelihood that it'll be possible to run the same analysis pipeline with the same data and get the same results years later.

[-]

Aerogel from fruit biowaste produces ultracapacitors

dalf | 2020-03-04 06:29:43 | 152 | # | ^

> "Aerogel from fruit biowaste produces ultracapacitors with high energy density and stability" (2020) https://www.sciencedirect.com/science/article/pii/S2352152X1...

Years ago, I remember reading about supercapacitor electrodes made from what would be waste hemp bast fiber. They used graphene as a control. And IIRC, the natural branching structure in hemp (the strongest natural fiber) was considered ideal for an electrode.

"Hemp Carbon Makes Supercapacitors Superfast" https://www.asme.org/topics-resources/content/hemp-carbon-ma...

How do the costs and performance compare across graphene, hemp, durian, and jackfruit electrodes?

While graphene production costs have fallen due to lots of recent research, IIUC all graphene production is hazardous due to graphene's ability to cross the lungs and the blood-brain barrier?

[+]

Hemp textiles are rough, but antimicrobial/antibacterial: hemp textiles resist growth of pneumonia and staph.

AFAIU, when they blend hemp with e.g. rayon it's good enough for underwear, sheets, scrubs.

The government is getting the heck out of the way of hemp, a great rotation crop that can be used for soil remediation.

[+]
[+]

#AccidentalArt.

(Freudian psychoanalytic projections are not supported by neuroimaging)

[+]
[+]

Technically, the 2013 farm bill (signed into law in 2014) authorized growing hemp for state-registered research purposes. https://www.votehemp.com/laws-and-legislation/federal-legisl...

Turns out UC Berkeley's got an approach for brewing cannabinoids (and I think terpenes) from yeast, and a company in Germany has a provisional patent application to brew cannabinoids from bacteria. We could be absorbing carbon ("sequestering" carbon) and coal ash acid rain with mostly fields of industrial hemp for which there are indeed thousands of uses.

[-]

Ask HN: How to Take Good Notes?

I want to improve my note-taking skill. I've started writing a text file with notes from class; however, I don't have a systematic way of writing. At this point I just write down, arbitrarily, things the professor said, things the professor wrote, how I understood the information, and everything else, mostly all over the place.

I'm wondering whether anyone has developed a system like this that I could adapt for myself, and how they did it.

[+]

> In 2009, psychologist Jackie Andrade asked 40 people to monitor a 2-½ minute dull and rambling voice mail message. Half of the group doodled while they did this (they shaded in a shape), and the other half did not. They were not aware that their memories would be tested after the call. Surprisingly, when both groups were asked to recall details from the call, those that doodled were better at paying attention to the message and recalling the details. They recalled 29% more information! https://www.health.harvard.edu/blog/the-thinking-benefits-of...

https://en.wikipedia.org/wiki/Doodle#Effects_on_memory references the same study.

Related articles on GScholar: https://scholar.google.com/scholar?q=related:YVG_-PKhNH4J:sc...

[+]

Many lectures and meetings may be experienced as similarly dull, irrelevant, and a waste of time (though you can't expect people to just read the necessary information ahead of time, as flipped classrooms expect of committed learners).

What would be a better experimental design for measuring effect on memory retention of passively-absorbed lectures?

[-]

Ask HN: STEM toy for a 3-year-old?

Hello! Can the HN community recommend a STEM toy (or something similar that would educate and entertain) for my 3-year-old boy? He's highly curious, but I can't find many things to play with him :( The things that I like bore him, and the things that he likes bore me (or are way too messy and dangerous to let him do)...

"12 Awesome (& Educational) STEM Subscription Boxes for Kids" https://stemeducationguide.com/subscription-boxes-for-kids/

Tape measure with big numbers, ruler(s)

Measuring cup, water, ice.

"Melissa & Doug Sunny Patch Dilly Dally Tootle Turtle Target Game (Active Play & Outdoor, Two Color Self-Sticking Bean Bags, Great Gift for Girls and Boys - Best for 3, 4, 5, and 6 Year Olds)"

Set of wooden blocks in a wood box; such as "Melissa & Doug Standard Unit Blocks"

...

https://sugarlabs.org/ , GCompris mouse and keyboard games with a trackpad and a mouse, ABCMouse, Khan Academy Kids, Code.org, ScratchJr (5-7), K12 Computer Science Framework https://k12cs.org/

[-]

OpenAPI v3.1 and JSON Schema 2019-09

[+]

Defusedxml lists a number of XML "Attack vectors: billion laughs / exponential entity expansion, quadratic blowup entity expansion, external entity expansion (remote), external entity expansion (local file), DTD retrieval" https://pypi.org/project/defusedxml/

Are there similar vulnerabilities in JSON parsers (that would then also need to be monkeypatched)?
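JSON has no entities, so billion laughs doesn't translate directly, but deeply nested input is an analogous parser stressor. CPython's json module guards against it by raising RecursionError rather than overflowing the C stack:

```python
import json

# 100,000 nested arrays: syntactically valid JSON, pathological to parse
deep = "[" * 100_000 + "]" * 100_000
try:
    json.loads(deep)
except RecursionError:
    # the parser bails out instead of crashing the interpreter
    print("parser refused pathologically nested input")
```

The quadratic-blowup analogue for JSON is usually huge repeated keys/strings, which is a memory problem rather than an expansion one.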

[+]

Yeah, expressing complex types with only primitives in a portable way is still unfortunately a challenge. For example, how do we encode a datetime without ISO8601 and a schema definition; or, how do we encode complex numbers with units like "1j+1"; or "1.01 USD/meter"?

Fortunately, we can use XSD with RDFS and RDFS with JSON-LD.
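A sketch of what that combination can look like (the property names here are assumptions for illustration, not a fixed vocabulary): XSD datatypes carried in JSON-LD `@type` annotations, with the unit as a separate term.

```python
import json

# Hypothetical JSON-LD document typing a datetime and a unit-qualified
# quantity with XSD datatypes instead of bare JSON primitives
doc = {
    "@context": {
        "xsd": "http://www.w3.org/2001/XMLSchema#",
        "schema": "https://schema.org/",
        "observed": {"@id": "schema:observationDate", "@type": "xsd:dateTime"},
    },
    "observed": "2020-01-15T10:30:00Z",
    "schema:value": {"@value": "1.01", "@type": "xsd:decimal"},
    "schema:unitText": "USD/meter",
}
print(json.dumps(doc, indent=2))
```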

LDP: Linked Data Platform and Solid: social linked data (and JSON-LD) are the newer W3C specs for HTTP APIs.

For one, pagination is a native feature of LDP.

[+]

It's astounding how often people make claims like this.

There is a whole lot of RDF Linked Data; and it links together without needing ad-hoc implementations of schema-specific relations.

I'll just link to the Linked Open Data Cloud again, for yet another hater that's probably never done anything for data interoperability: https://lod-cloud.net/

That's a success.

[+]

JSON5 supports comments: https://json5.org/


When I searched for "JSON5 [language]", I found JSON5 implementations in/for Rust, C, Python, Java, and Haskell on the first page of search results.

I like YAML, but some of the syntax conveniences are gotchas: 'no' must be quoted, for example.
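A sketch of that gotcha (the "Norway problem"), assuming a YAML 1.1 parser such as PyYAML:

```yaml
# In YAML 1.1 these bare scalars resolve to booleans, not strings --
# quote them to keep the literal text.
countries:
  - no        # -> false under YAML 1.1
  - "no"      # -> the string "no"
answers: [y, n, yes, off]   # all booleans under YAML 1.1
version: 3.10               # a float (3.1) -- quote version numbers too
```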

[-]

Git for Node.js and the browser using libgit2 compiled to WebAssembly

This looks useful. Are there pending standards for other browser storage mechanisms than an in-memory FS?

Would it be a security risk to grant limited local filesystem access by domain; with a storage quota?

... To answer my own question, it looks like the FileSystem API is still experimental and only browser extensions can request access to the actual filesystem: https://developer.mozilla.org/en-US/docs/Web/API/FileSystem

[+]
[-]

Scientists use ML to find an antibiotic able to kill superbugs in mice

[+]

The second-order costs avoided by treatments developed so innovatively could be included in a "value to society" estimation.

"Acknowledgements" lists the grant funders for this federally-funded open access study.

"A Deep Learning Approach to Antibiotic Discovery" (2020) https://doi.org/10.1016/j.cell.2020.01.021

> Mutant generation

> Chemprop code is available at: https://github.com/swansonk14/chemprop

> Message Passing Neural Networks for Molecule Property Prediction

> A web-based version of the antibiotic prediction model described herein is available at: http://chemprop.csail.mit.edu/

> This website can be used to predict molecular properties using a Message Passing Neural Network (MPNN). In order to make predictions, an MPNN first needs to be trained on a dataset containing molecules along with known property values for each molecule. Once the MPNN is trained, it can be used to predict those same properties on any new molecules.

[-]

Shit – An implementation of Git using POSIX shell

kick | 2020-02-11 17:35:48 | 814 | # | ^

You can also set $GIT_PAGER/core.pager/$PAGER and create an alias to accomplish this:

  #export PAGER='less -SEXIER'
  #export GIT_PAGER='less -SEXIER'
  git config --global core.pager 'less -SEXIER'
  git config --global alias.l 'log --graph --oneline --decorate --color'
  # git diff ~/.gitconfig
  git l
core.pager: https://git-scm.com/docs/git-config#Documentation/git-config...

> The order of preference is the $GIT_PAGER environment variable, then core.pager configuration, then $PAGER, and then the default chosen at compile time (usually less).

[-]

HTTP 402: Payment Required

[+]

> The new W3C Payment Request API [4] makes it easy for browsers to offer a standard (and probably(?) already accessible) interface for the payment data entry screen, at least. https://www.w3.org/TR/payment-request/

[+]

It really could.

[-]

Salesforce Sustainability Cloud Becomes Generally Available

> - Reduce emissions with trusted analytics from a trusted platform. Analyzing carbon emissions from energy usage and company travel can be daunting and time-consuming. But with all your data flowing directly onto one platform, you can efficiently quantify your carbon footprint. Formulate a climate action plan for your company from a single source of truth, built on our trusted and secure data platform.

> - Take action with data-driven insights. Prove to customers, employees, and potential investors your commitment to carbon-conscious and sustainable practices. Offer regulatory agencies a clear snapshot of your energy usage patterns. Extrapolate energy consumption and track carbon emissions with cutting-edge analytics — and take action.

> - Tackle carbon accounting audits in weeks instead of months. Carbon analysis can be an overwhelming time commitment, even a barrier to action for companies that want to get it right. Use preloaded datasets from the U.S. EPA, IPCC, and others to accurately assess your carbon accounting. Streamline your data gathering and climate action plan with embedded guides and user flows.

> - Empower decision makers with executive-ready dashboard data. Evaluate corporate environmental impact with rich data visualization and dashboards. Track energy patterns and emission trends, then make the business case to executives. Once an organization understands its carbon footprint, decision makers can begin to drive sustainability solutions.

Are there similar services for Sustainability Reporting and accountability? https://en.wikipedia.org/wiki/Sustainability_reporting

[-]

Httpx: A next-generation HTTP client for Python

[+]
[+]

FWIW, requests3 has "Type-annotations for all public-facing APIs", asyncio, HTTP/2, connection pooling, timeouts, etc https://github.com/kennethreitz/requests3

[+]

It looks like requests is now owned by PSF. https://github.com/psf/requests

But IDK why requests3 wasn't transferred as well, and why issues appear to be disabled on the repo now.

The docs reference a timeout arg (that appears to default to the socket default timeout) for connect and/or read https://3.python-requests.org/user/advanced/#timeouts

And the tests reference a timeout argument. If that doesn't work, I wonder how much work it would be to send a PR (instead of just complaining to Ken and not contributing any code)

[+]

TIL requests3 beta works with httpx as a backend: https://github.com/not-kennethreitz/team/issues/21#issuecomm...

If requests3 is installed, `import requests` imports requests3.

[-]

BlackRock CEO: Climate Crisis Will Reshape Finance

+1. From the letter: https://www.blackrock.com/us/individual/larry-fink-ceo-lette...

> The money we manage is not our own. It belongs to people in dozens of countries trying to finance long-term goals like retirement. And we have a deep responsibility to these institutions and individuals – who are shareholders in your company and thousands of others – to promote long-term value.

> Climate change has become a defining factor in companies’ long-term prospects. Last September, when millions of people took to the streets to demand action on climate change, many of them emphasized the significant and lasting impact that it will have on economic growth and prosperity – a risk that markets to date have been slower to reflect. But awareness is rapidly changing, and I believe we are on the edge of a fundamental reshaping of finance.

> The evidence on climate risk is compelling investors to reassess core assumptions about modern finance. Research from a wide range of organizations – including the UN’s Intergovernmental Panel on Climate Change, the BlackRock Investment Institute, and many others, including new studies from McKinsey on the socioeconomic implications of physical climate risk – is deepening our understanding of how climate risk will impact both our physical world and the global system that finances economic growth.

Environmental, social and corporate governance > Responsible investment: https://en.wikipedia.org/wiki/Environmental,_social_and_corp...

Corporate social responsibility: https://en.wikipedia.org/wiki/Corporate_social_responsibilit...

UN-supported PRI: Principles for Responsible Investment (2,350 signatories (2019-04)) https://en.wikipedia.org/wiki/Principles_for_Responsible_Inv...

[-]

A lot of complex “scalable” systems can be done with a simple, single C++ server


Dask groupby example: https://examples.dask.org/dataframes/02-groupby.html

> Generally speaking, Dask.dataframe groupby-aggregations are roughly same performance as Pandas groupby-aggregations, just more scalable.

The dask.distributed scheduler can also run on one high-RAM instance (with threads or processes) https://docs.dask.org/en/latest/setup.html

Pandas docs > Ecosystem > Out-of-core: https://pandas.pydata.org/pandas-docs/stable/ecosystem.html#...
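An illustrative pure-Python sketch of what "out-of-core" groupby aggregation amounts to (this is not the Dask API): stream the rows in fixed-size chunks and fold partial aggregates, so memory is bounded by the chunk size rather than the dataset.

```python
from collections import defaultdict

def chunked(rows, size):
    """Yield fixed-size chunks from an iterable of rows."""
    chunk = []
    for row in rows:
        chunk.append(row)
        if len(chunk) == size:
            yield chunk
            chunk = []
    if chunk:
        yield chunk

def groupby_sum(rows, chunk_size=1000):
    """Groupby-sum over (key, value) rows without holding them all in RAM."""
    totals = defaultdict(float)
    for chunk in chunked(rows, chunk_size):
        # in Dask, each "chunk" would be a pandas partition read from disk
        for key, value in chunk:
            totals[key] += value
    return dict(totals)
```

Dask generalizes this pattern: partitioned aggregations run per-chunk (possibly in parallel) and the partial results are combined.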

Reading from Parquet into Apache Arrow is much faster than CSV because the data can just be directly loaded into RAM. https://ursalabs.org/blog/2019-10-columnar-perf/

If you have GPU instances, cuDF has a Pandas-like API on top of Apache Arrow. https://github.com/rapidsai/cudf

> Built based on the Apache Arrow columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.

> cuDF provides a pandas-like API that will be familiar to data engineers & data scientists, so they can use it to easily accelerate their workflows without going into the details of CUDA programming.

Dask-ML makes scalable scikit-learn, XGBoost, TensorFlow really easy. https://dask-ml.readthedocs.io/en/latest/

... re: the OT: While it's possible to write C++ code that's really fast, it's generally inflexible, expensive to develop, and dangerous to write for devs whose expertise lies in their own problem domains rather than in C++. Much saner to put a Python API on top and optimize the underlying compiled code.

There are a few C++ frameworks in the top quartile of the TechEmpower framework benchmarks. https://www.techempower.com/benchmarks/

Hardware/hosting is relatively cheap. Developers and memory vulnerabilities aren't.

[+]

https://docs.dask.org/en/latest/setup/hpc.html says dask-jobqueue handles "PBS, SLURM, LSF, SGE and other resource managers"

"Dask on HPC, what works and what doesn't" https://github.com/dask/dask-blog/issues/5

Maybe you should spend some time developing a job visualization system from scratch, for end users with lots of C, JS, and HTML experience: https://jobqueue.dask.org/en/latest/interactive.html

[+]
[-]

Warren Buffett is spending billions to make Iowa 'the Saudi Arabia of wind'

It's both cost-rational and environment-rational to invest heavily in clean energy (with or without the comparatively paltry tax incentives).

The long-term costs of climate change and inaction are unfortunately still mostly external costs to energy producers. We should expect that to change as we start developing competencies in evaluating the costs and frequency of weather disasters exacerbated by anthropogenic climate change. We all get to pay for floods, fires, tornados, hurricanes, landslides, blizzards, and the gosh darn heat.

Insurance firms clearly see these costs. Our military sees the costs of responding to natural disasters. Local economies see the costs of months and years spent on disaster relief; on just getting back up to speed so that they can generate profit from selling goods and services (and pay taxes to support disaster relief efforts essential to operational readiness).

The cost per kilowatt hour of wind (and solar) energy is now lower than operating existing dirty energy plants that dump soot on our crops, air, and water.

With wind, they talk about the "alligator curve". With solar, it's the "duck curve". Grid-scale energy storage is necessary for reaching 100% renewable energy as soon as possible.

Iowa's renewable energy tax incentives are logically aligned with international long-term goals:

UN Sustainable Development Goal 7: Affordable and Clean Energy https://www.globalgoals.org/7-affordable-and-clean-energy

Goal 13: Climate Action https://www.globalgoals.org/13-climate-action

SDG Target 12.6: "Encourage companies to adopt sustainable practices and sustainability reporting" (CSR; e.g. GRI Sustainability Reporting Standards that we can score portfolios with)

https://www.undp.org/content/undp/en/home/sustainable-develo... :

> Rationalize inefficient fossil-fuel subsidies that encourage wasteful consumption by removing market distortions, in accordance with national circumstances, including by restructuring taxation and phasing out those harmful subsidies, where they exist, to reflect their environmental impacts, taking fully into account the specific needs and conditions of developing countries and minimizing the possible adverse impacts on their development in a manner that protects the poor and the affected communities

...

> Thanks. How can I say "try and only run this [computational workload] in zones with 100% PPA offsets or 100% directly sourced #CleanEnergy"? #Goal7 #Goal11 #Goal12 #Goal13 #GlobalGoals #SDGs

It makes good business sense to invest in clean energy to take advantage of tax incentives, minimize future costs to other business units (e.g. insurance, taxes), and earn the support of investors choosing portfolios with long term environmental (and thus economic) sustainability as a primary objective.

[-]

Scientists Likely Found Way to Grow New Teeth for Patients

"Scientists Have Discovered a Drug That Fixes Cavities and Regrows Teeth" https://futurism.com/neoscope/scientists-have-discovered-thi...

Tideglusib https://en.wikipedia.org/wiki/Tideglusib

[+]
[+]
[-]

Announcing the New PubMed

[+]

This looks great: I like the search timeline, the ability to easily search for free full-text meta-analyses (a selection bias we should all be aware of), the MeSH term listing in a reasonably-sized font, and that there's schema.org/ImageObject metadata within the page, but there's no [Medical]ScholarlyArticle metadata?

I've worked with Google Scholar (:o) [1], Semantic Scholar (Allen Institute for AI) [2], Meta (Chan Zuckerberg Institute) [3], Zotero, Mendeley and a number of other tools for indexing and extracting metadata and graph relations from https://schema.org/ScholarlyArticle and MedicalScholarlyArticles . Without RDFa (or Microdata, or JSON-LD) in PDF, there's a lot of parsing that has to go down in order to get a graph from the citations in the article. Each service adds value to this graph of resources. Pushing forward on publishing linked research that's reproducible (#LinkedResearch, #LinkedReproducibility) is a worthwhile investment in meta-research that we have barely yet addressed:

> http://Schema.org/NewsArticle .citation: https://schema.org/citation ... Wouldn't it be great if NewsArticles linked to the ScholarlyArticle and/or Notebook CreativeWorks that they're .about (with reified relations)?

> A practical use case: Alice wants to publish a ScholarlyArticle [1] (in HTML with structured data, as a PDF) predicated upon Datasets [2] (as CSV, CSVW JSONLD, XLSX (DataDownload)) with static HTML (and no special HTTP headers). 1 https://schema.org/ScholarlyArticle 2 https://schema.org/Dataset*

> B wants to build a meta analysis: to collect a # of ScholarlyArticles and Dataset DataDownloads; review study controls and data; merge, join, & concatenate Datasets if appropriate, and inductively or deductively infer a conclusion and suggestions for further studies of variance*
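As a sketch of the missing metadata (every identifier below is an invented placeholder): a schema.org ScholarlyArticle JSON-LD island that a crawler could lift from the HTML without any PDF parsing, citations included.

```python
import json

article = {
    "@context": "https://schema.org",
    "@type": "MedicalScholarlyArticle",
    "identifier": "https://doi.org/10.1000/example",
    "name": "Example article title",
    "author": {
        "@type": "Person",
        "identifier": "https://orcid.org/0000-0000-0000-0000",
    },
    "citation": [
        {"@type": "ScholarlyArticle",
         "identifier": "https://doi.org/10.1000/cited-work"},
    ],
    "isBasedOn": {
        "@type": "Dataset",
        "identifier": "https://doi.org/10.1000/dataset",
    },
}
# Embed as a JSON-LD script island in the article page
html_snippet = (
    '<script type="application/ld+json">'
    + json.dumps(article)
    + "</script>"
)
```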

The Linked Open Data Cloud shows the edges, the relations, the structured data links between very many (life sciences) datasets: https://lod-cloud.net/ . https://5stardata.info/en/ lists TimBL's suggested 5-star deployment scheme for Open Data, which culminates in publishing linked open data in non-proprietary formats that use URIs to describe and link to things.

Could any of these [1][2][3][4][5] services cross-link the described resources, given a common URI identifier such as https://schema.org/identifier and/or https://schema.org/url ? ORCID provides stable identifiers for researchers and publishers, disambiguating people who share a name. W3C DID solves for this need in a different way.

When I check an article result page with the OpenLink OSDS extension (or any of a number of other tools for extracting structured data from HTML pages (and documents!) https://github.com/CodeForAntarctica/codeforantarctica.githu... ), there could be quite a bit more data there for search engines, browser extensions, and meta-research tools.

Is this something like ElasticSearch on the backend? It is possible to store JSON-LD documents in the search index. I threw together elasticsearchjsonld to "Generate JSON-LD @contexts from ElasticSearch JSON Mappings" for the OpenFDA FAERS data a few years ago. That's not GraphQL or SPARQL, but it's something and it's Linked Data.
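The mapping-to-@context idea, roughly (a sketch, not elasticsearchjsonld's actual API; the FAERS-like field names are illustrative):

```python
def mapping_to_context(es_mapping, vocab="https://schema.org/"):
    """Sketch: derive a JSON-LD @context from the top-level properties
    of an Elasticsearch mapping, assuming a 1:1 field-to-term match."""
    context = {"@vocab": vocab}
    for field, spec in es_mapping.get("properties", {}).items():
        term = {"@id": vocab + field}
        if spec.get("type") == "date":
            # Type-coerce date fields so consumers get typed literals
            term["@type"] = "http://www.w3.org/2001/XMLSchema#date"
        context[field] = term
    return {"@context": context}

# FAERS-like mapping fragment (illustrative):
mapping = {"properties": {"receiptdate": {"type": "date"},
                          "safetyreportid": {"type": "keyword"}}}
ctx = mapping_to_context(mapping)
assert ctx["@context"]["receiptdate"]["@type"].endswith("#date")
```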

re: "Canada's Decision To Make Public More Clinical Trial Data Puts Pressure On FDA" https://news.ycombinator.com/item?id=21232183

> We really could get more out of this data through international collaboration and through linked data (e.g. URIs for columns). See: "Open, and Linked, FDA data" https://github.com/FDA/openfda/issues/5#issuecomment-5392966... and "ENH: Adverse Event Count / 'Use' Count Heatmap" https://github.com/FDA/openfda/issues/49 . With sales/usage counts, we'd have a denominator with which we could calculate relative hazard.

W3C Web Annotations handle threaded comments and highlights; reviewing the reviewers is left as an exercise for the reader. Does Zotero still make it easy to save the bibliographic metadata for one or more ScholarlyArticles from PubMed to a collection in the cloud (and add metadata/annotations)?

Sorry to toot my own horn here. Great job on this. This opens up many new opportunities for research.

[1] https://scholar.google.com

[2] https://www.semanticscholar.org/

[3] https://www.meta.org/

[4] https://zotero.org/

[5] https://mendeley.org/


Ask HN: Is it worth it to learn C in 2020?

The Linux, FreeBSD, Windows, and macOS kernels are written in C, as are CPython, Ruby, Perl, PHP, and NumPy (Node.js is C and C++). If you want to review and contribute code to any of these, you'll need to learn C.

There are a number of coding guidelines e.g. for safety-critical systems where bounded running time and resource consumption are essential. These coding guidelines and standards are basically only available for C, C++, and Ada. https://github.com/stanislaw/awesome-safety-critical/blob/ma...

Even though modern languages have garbage collection that runs whenever it feels like it, it's helpful to learn about memory management in C (or C++). You'll appreciate object destructor methods that free memory, sockets, and file handles that much more. Reference cycles in object graphs are easier to handle with modern C++ than with C. Are there RAII (Resource Acquisition Is Initialization) "smart pointers" that track reference counts in C?
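For contrast, a garbage-collected runtime detects reference cycles that pure reference counting would leak; a quick CPython demonstration:

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.other = None

# Build a two-node reference cycle, then drop all external references.
a, b = Node(), Node()
a.other, b.other = b, a
probe = weakref.ref(a)  # observe the object without keeping it alive
del a, b

# Pure reference counting would leak the cycle (each node still holds
# a reference to the other); CPython's cycle collector frees it.
gc.collect()
assert probe() is None
```

This is roughly the bookkeeping you take on manually in C, and that reference-counted smart pointers in C++ still can't do for cyclic graphs without weak references.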

Without OO namespacing in C, function names are often prefixed with a module name. How many ways could a struct be initialized? When can I free that memory?

When strace prints a syscall, what is that?

Is it necessary to learn C? Somebody needs to maintain and improve the C-based foundation for most of our OSes and very many of our fancy scripting languages. C can be very unforgiving: it's really easy to get it wrong, and there's a lot to keep in mind at once: the cognitive burden is higher with C (and more so still with ASM and WebAssembly) than with an interpreted (or compiled) duck-typed 3GL scripting language with first-class functions.

What's a good progression that includes syntax, finding and reading the libc docs, Make/CMake/Autotools, secure recommended compiler flags for GCC (CPPFLAGS, CFLAGS, LDFLAGS) and LLVM Clang?

C: https://learnxinyminutes.com/docs/c/

C++: https://learnxinyminutes.com/docs/c++/

Links to the docs for Libc and other tools: https://westurner.github.io/tools/#libc

xeus-cling is a Jupyter kernel for C++ (and most of C) that works with nbgrader. https://github.com/QuantStack/xeus-cling

What's a better unit-testing library for C/C++ than gtest? https://github.com/google/googletest/


For network programming, you might consider asynchronous programming with coroutines. C++20 has them and they're already supported in LLVM. For C, there are a number of implementations of coroutines: https://en.wikipedia.org/wiki/Coroutine#Implementations_for_...

> Once a second call stack has been obtained with one of the methods listed above, the setjmp and longjmp functions in the standard C library can then be used to implement the switches between coroutines. These functions save and restore, respectively, the stack pointer, program counter, callee-saved registers, and any other internal state as required by the ABI, such that returning to a coroutine after having yielded restores all the state that would be restored upon returning from a function call. Minimalist implementations, which do not piggyback off the setjmp and longjmp functions, may achieve the same result via a small block of inline assembly which swaps merely the stack pointer and program counter, and clobbers all other registers. This can be significantly faster, as setjmp and longjmp must conservatively store all registers which may be in use according to the ABI, whereas the clobber method allows the compiler to store (by spilling to the stack) only what it knows is actually in use.

CPython's asyncio implementation (originally codenamed 'tulip') has a C-accelerated module and is IMHO much easier to use than callback-based APIs like Twisted's, or JS before Promises and the tulip-like async/await keywords were added to ECMAScript. uvloop (based on libuv, like Node) is apparently the fastest asyncio event loop. CPython asyncio C module source: https://github.com/python/cpython/blob/master/Modules/_async... Asyncio docs: https://docs.python.org/3/library/asyncio.html

(When things like file or network I/O are I/O bound, the program can yield to allow other asynchronous coroutines ('async') to run on that core. With network programming, we're typically waiting for things to send or reply.)
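A minimal sketch of that overlap with asyncio (asyncio.sleep stands in for real network I/O):

```python
import asyncio

async def fetch(name, delay):
    # While this coroutine awaits (simulated I/O), the event loop
    # runs other coroutines on the same core.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main():
    # Both "requests" run concurrently; total time is ~max(delays),
    # not the sum, because the waits overlap.
    return await asyncio.gather(fetch("a", 0.1), fetch("b", 0.1))

print(asyncio.run(main()))  # ['a: done', 'b: done']
```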

Return-oriented-programming > Return-into-library technique is an interesting read regarding system programming :) https://en.wikipedia.org/wiki/Return-oriented_programming#Re...


Free and Open-Source Mathematics Textbooks

This is a good list of books. Unfortunately, many of the links are broken? Probably just my luck, but the first few "with Sage" books I excitedly selected 404'd. I'll send an email.

> Moreover, the American Institute of Mathematics maintains a list of approved open-source textbooks. https://aimath.org/textbooks/approved-textbooks/

I also like the (free) Green Tea Press books: Think Stats, Think Bayes, Think DSP, Think Complexity, Modeling and Simulation in Python, Think Python 2e: How To Think Like a Computer Scientist https://greenteapress.com/wp/

And IDK how many times I've recommended the book for the OCW "Mathematics for Computer Science" course: https://ocw.mit.edu/courses/electrical-engineering-and-compu...

There may be a newer edition than the 2017 version of the book: https://courses.csail.mit.edu/6.042/spring17/mcs.pdf


Make CPython segfault in 5 lines of code

FWIW, this segfaults CPython in 2 lines:

  import ctypes
  ctypes.cast(1, ctypes.py_object)
Interestingly, this works:

  import ctypes, gc
  x = 22
  _id = id(x)  # the object's address (in CPython)
  del x
  gc.collect()
  y = ctypes.cast(_id, ctypes.py_object).value
  assert y == 22  # works because CPython caches small ints, so the object was never freed


Applications Are Now Open for YC Startup School – Starts in January


> In the town of 14,000 I currently reside in it's pretty difficult to network in a meaningful way and talk about my company with folks that can give guidance and feedback

GitLab and Zapier are examples of all-remote former YC companies.

"GitLab Handbook" https://about.gitlab.com/handbook/

"The Ultimate Guide to Remote Work: Lessons from a team of over 200 remote workers" https://zapier.com/learn/remote-work/


Startup School is now designed as a remote program.

It'd be interesting to hear from them about building all remote team culture with transparency and accountability. Are text-chat "digital stand up meetings" with quality transcripts of each team member's responses to the three questions enough? ( Yesterday / Today and Tomorrow / Obstacles // What did I do since the last time we met? What will I do before the next time we meet? What obstacles are blocking my progress? )

Or are there longer-term planning sessions focused on delivering value well beyond first getting the MVP out and maximizing marginal profit by minimizing costs?


‘Adulting’ is hard. UC Berkeley has a class for that

+1 for Life Skills for Adulting and also Home Economics including Family and Meal Planning.

A bunch of resources from "Consumer science (a.k.a. home economics) as a college major" https://news.ycombinator.com/item?id=17894632 : CS 007: Personal Finance for Engineers, r/personalfinance/wiki, Healthy Eating Plate, Khan Academy > Science > Health and Medicine

And also, Instant Pot. The Instant Pot pressure cooker is your key to nutrient preservation and ultimate happiness.


Five cities account for vast majority of growth in U.S. tech jobs: study


> To that end, the present paper proposes that Congress assemble and award to a select set of metropolitan areas a major package of federal innovation inputs and supports that would accelerate their innovation-sector scale-up. Along these lines, we envision Congress establishing a rigorous competitive process by which the most promising eight to 10 potential growth centers would receive substantial financial and regulatory support for 10 years to become self-sustaining new innovation centers. Such an initiative would not only bring significant economic opportunity to more parts of the nation, but also boost U.S. competitiveness on the global stage.

"Potential growth centers" sounds promising.


Don’t Blame Tech Bros for the Housing Crisis

If there is demand for housing, we would expect people to be finding land and building housing unless there are policies that prevent this (and/or long commutes that people don't want to suffer) or higher-value opportunities.

If the city wanted residential areas (over commercial tax revenue giants), the city should have zoned residential.

The people elect city leaders. The people all want affordable housing.

With $4.5b from corporations and nowhere to build but out or up, high rise residential is the most likely outcome. (Which is typical for dense urban areas that have prioritized and attracted corporate tax revenue over affordable housing)

... Effing scooter bros with their scooters and their gold rush money and their tiny houses.

[Edit: more than] One company says "I will pay you $10,000 to leave the Bay Area / Silicon Valley" Because there's a lot of tech talent (because universities and opportunities) but ridiculously high expenses.

What an effectual headline from NY.


Docker is just static linking for millenials

No, LXC does quite a bit more than static linking. An inability to recognize that likely has nothing to do with generation.

Can you launch a process in a chroot, with cgroups? Okay, now upgrade everything it's linked with (without breaking the host/build system).

Configure a host-only network for a few processes – running in separate cgroups – without DHCP.

Want to criticize Docker? Rootless builds and containers are essentially impossible with it. Buildah and Podman make rootless builds possible without a daemon socket. Like sysvinit, though, IDK how well centralized logging (and logshipping, and logged crashes and restarts) works without that socket.

Given comments like this, it's likely that you've never built a chroot for a different distro, or launched a process with cgroups.


Show HN: Bamboolib – A GUI for Pandas (Python Data Science)

This looks excellent. The ability to generate the Python code for the pandas dataframe transformations looks to be more useful than OpenRefine, TBH.

How much work would it be to use Dask (and Dask-ML) as a backend?

I see the OneHotEncoder button. Have you considered integration with Yellowbrick? They've probably already implemented a few of your near-future and someday roadmap items involving hyperparameter selection and model selection and visualization? https://www.scikit-yb.org/en/latest/

This video shows more of the advanced bamboolib features: https://youtu.be/I0a58h1OCcg

The live histogram rebinning looks useful. Recently I read about a 'shadowgram' / ~KDE approach with very many possible bin widths translucently overlaid in one chart. https://stats.stackexchange.com/questions/68999/how-to-smear...

Yellowbrick also has a bin width optimization visualization in yellowbrick.target.binning.BalancedBinningReference: https://www.scikit-yb.org/en/latest/api/target/binning.html

Great work.


In the past, I've looked at OpenRefine and Jupyter integration. Once I've learned to do data transformation with pandas and sklearn with code, I'll report back to you.

Pandas-profiling has a number of cool descriptive statistics features as well. https://github.com/pandas-profiling/pandas-profiling

There's a new IterativeImputer in Scikit-learn 0.22 that it'd be cool to see visualizations of. https://twitter.com/TedPetrou/status/1197150813707108352 https://scikit-learn.org/stable/modules/impute.html

A plugin model would be cool; though configuring the container every time wouldn't be fun. Some ideas about how we could create a desktop version of binderhub in order to launch REES-compatible environments on our own resources: https://github.com/westurner/nbhandler/issues/1


The set difference and/or intersection of dir(pd.DataFrame) and dir(dask.dataframe.DataFrame), annotated with inspect.signature and inspect.getdoc, would be a useful document for either or both projects.

pyfilemods generates a ReStructuredText document with introspected API comparisons. "Identify and compare Python file functions/methods and attributes from os, os.path, shutil, pathlib, and path.py" https://github.com/westurner/pyfilemods
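A sketch of that comparison with inspect (using list and tuple as stand-ins; the technique is the same for pd.DataFrame and dask.dataframe.DataFrame):

```python
import inspect

def api(cls):
    """Public attribute names of a class."""
    return {name for name in dir(cls) if not name.startswith("_")}

def compare(a, b):
    """Return (only_in_a, only_in_b, shared) attribute-name sets."""
    return api(a) - api(b), api(b) - api(a), api(a) & api(b)

# Stand-ins for pd.DataFrame and dask.dataframe.DataFrame:
only_list, only_tuple, shared = compare(list, tuple)
assert "append" in only_list  # list has .append, tuple does not
assert "index" in shared      # both define .index

# For shared names, inspect.signature / inspect.getdoc fill in the details:
print(inspect.signature(list.append))  # (self, object, /)
```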


Battery-Electric Heavy-Duty Equipment: It's Sort of Like a Cybertruck

> They’ve created a single platform that can be easily modified to do any number of jobs. For instance, their flagship product, the Dannar 4.00, can accept over 250 attachments from CAT, John Deere, or Bobcat. […] Having interoperability with so many different types of equipment, one platform can easily perform many tasks over the course of a year. This is a huge win for cash strapped municipalities. Why would a company or municipality opt to have a backhoe parked all winter long when it could be doing another job?

Does it have regenerative brakes?

CSR: Corporate Social Responsibility

> Proponents argue that corporations increase long-term profits by operating with a CSR perspective, while critics argue that CSR distracts from businesses' economic role.

... The 3 Pillars of Corporate Sustainability: Environmental, Social, Economic https://www.investopedia.com/articles/investing/100515/three...

Three dimensions of sustainability: (Environment (Society (Economy))) https://en.wikipedia.org/wiki/Sustainability#Three_dimension...

What are some of the corporate sustainability reporting standards?

How can I score a candidate portfolio with sustainability metrics in order to impact invest with maximum impact?

> What are some of the corporate sustainability reporting standards?

From https://en.wikipedia.org/wiki/Sustainability_reporting#Initi... :

>> Organizations can improve their sustainability performance by measuring (EthicalQuote (CEQ)), monitoring and reporting on it, helping them have a positive impact on society, the economy, and a sustainable future. The key drivers for the quality of sustainability reports are the guidelines of the Global Reporting Initiative (GRI),[3] (ACCA) award schemes or rankings. The GRI Sustainability Reporting Guidelines enable all organizations worldwide to assess their sustainability performance and disclose the results in a similar way to financial reporting.[4] The largest database of corporate sustainability reports can be found on the website of the United Nations Global Compact initiative.

The GRI (Global Reporting Initiative) Standards are now aligned with the UN Sustainable Development Goals (#GlobalGoals). https://en.wikipedia.org/wiki/Global_Reporting_Initiative

>> In 2017, 63 percent of the largest 100 companies (N100), and 75 percent of the Global Fortune 250 (G250) reported applying the GRI reporting framework.[3]

> How can I score a candidate portfolio with sustainability metrics in order to impact invest with maximum impact?

Does anybody have solutions for this? AFAIU, existing cleantech funds are more hand-picked than screened according to sustainability fundamentals.


GTD Tickler file – a proposal for text file format

Taskwarrior is also built upon the todo.txt format. [1]

Taskw supports various task dates – { due: scheduled: wait: until: recur: } [2]

Taskw supports various named dates like soq/eocq, som/eom (start/end of [current] quarter, start/end of month), tomorrow, later [3]

Taskw recurring tasks (recur:) use the duration syntax: weekly/wk/w, monthly/mo, quarterly/qtr, yearly/yr, … [4]

Pandas has a "date offset" "frequency string" microsyntax that supports business days, quarters, and years; e.g. BQuarterEnd, BQuarterBegin [5]
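For example, the QuarterEnd/BQuarterEnd semantics can be approximated in plain Python (a sketch, not pandas' implementation):

```python
from datetime import date, timedelta

def quarter_end(d):
    """Last calendar day of d's quarter (pandas: QuarterEnd)."""
    q_end_month = ((d.month - 1) // 3) * 3 + 3
    if q_end_month == 12:
        return date(d.year, 12, 31)
    return date(d.year, q_end_month + 1, 1) - timedelta(days=1)

def business_quarter_end(d):
    """Roll a weekend quarter-end back to Friday (pandas: BQuarterEnd)."""
    end = quarter_end(d)
    while end.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        end -= timedelta(days=1)
    return end

assert quarter_end(date(2019, 11, 5)) == date(2019, 12, 31)
# 2022-12-31 was a Saturday, so the business quarter-end rolls back:
assert business_quarter_end(date(2022, 12, 1)) == date(2022, 12, 30)
```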

IDK how usable by other tools these date string parsers are.

W/ just a text editor, having `todo.txt`, `daily.todo.txt`, and `weekly.todo.txt` (and `cleanhome.todo.txt` and `hygiene.todo.txt` with "## heading" tasks that get lost @where +sorting) works okay.

I have physical 43 folders, too: A 12 month and a 31 day expanding file. [6]

[1] http://todotxt.org/

[2] https://taskwarrior.org/docs/using_dates.html

[3] https://taskwarrior.org/docs/named_dates.html

[4] https://taskwarrior.org/docs/durations.html

[5] https://pandas.pydata.org/pandas-docs/stable/user_guide/time...

[6] http://www.43folders.com/


Ask HN: Any suggestion on how to test CLI applications?

Hello HN!

I've been looking at alternatives for testing command line applications: specifically, for example, exit codes, output messages, and whatnot. I've seen "bats" https://github.com/sstephenson/bats and Bazel for testing, but I'm curious as to what other tools people use on a day-to-day basis. UI testing is nice with tools like Cypress.io, and maybe there's something out there that isn't as popular but is useful.

Thoughts?

pytest-docker-pexpect: https://github.com/nvbn/pytest-docker-pexpect

Pexpect: https://pexpect.readthedocs.io/en/stable/

pytest with subprocess.popen (or Sarge) may be sufficient for checking return codes and checking stdout and stderr output streams. Pytest has tmp_path and tmpdir fixtures that provide less test isolation than Docker containers: http://doc.pytest.org/en/latest/tmpdir.html

sarge.Capture.expect() takes a regex and returns None if there's no match: https://sarge.readthedocs.io/en/latest/tutorial.html#looking...
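A minimal sketch of the subprocess approach, pytest-style (the "CLI" under test here is an inline `python -c` stand-in):

```python
import subprocess
import sys

def run_cli(*args):
    """Run a CLI and capture its exit code, stdout, and stderr."""
    return subprocess.run(
        # Stand-in CLI: prints a message and exits with code 2.
        [sys.executable, "-c", "import sys; print('hello'); sys.exit(2)", *args],
        capture_output=True,
        text=True,
    )

def test_exit_code_and_output():
    result = run_cli()
    assert result.returncode == 2
    assert result.stdout.strip() == "hello"
    assert result.stderr == ""

test_exit_code_and_output()
```

Swap the command list for your real binary; pytest will collect `test_*` functions automatically.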

The Golden Butterfly and the All Weather Portfolio

The Golden Butterfly (is a modified All Weather Portfolio)

> Stocks: 20% Domestic Large Cap Fund (Vanguard’s VTI or Goldman Sachs’ JUST), 20% Domestic Small Cap Value (Vanguard’s VBR)

> Bonds: 20% Long Term (Vanguard’s BLV), 20% Short Term (Vanguard’s BSV)

> Real Assets: 20% Gold (SPDR’s GLD)

The All Weather Portfolio:

> Stocks: 30% Domestic Total Stock Market (VG total stock)

> Bonds: 40% Long Term, 15% Intermediate-Term

> Real Assets: 7.5% Commodities, 7.5% Gold

What about investing in sustainable, innovative startups and small businesses (and crowdfunding campaigns)? What about direct capital investment? What about the American dream?

(Small businesses are a significant source of growth in our economy today and for the future)


Canada's Decision To Make Public More Clinical Trial Data Puts Pressure On FDA

We really could get more out of this data through international collaboration and through linked data (e.g. URIs for columns). See: "Open, and Linked, FDA data" https://github.com/FDA/openfda/issues/5#issuecomment-5392966... and "ENH: Adverse Event Count / 'Use' Count Heatmap" https://github.com/FDA/openfda/issues/49

With sales/usage counts, we'd have a denominator with which we could calculate relative hazard.
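With that denominator, relative hazard reduces to a ratio of event rates; a sketch with entirely made-up numbers:

```python
# Hypothetical counts: adverse events and prescriptions dispensed per drug.
events = {"drug_a": 120, "drug_b": 30}
uses   = {"drug_a": 1_000_000, "drug_b": 100_000}

# Event rate per drug = events / uses (the denominator in question)
rates = {d: events[d] / uses[d] for d in events}

# Relative hazard of drug_b vs drug_a: the ratio of the two rates.
relative_hazard = rates["drug_b"] / rates["drug_a"]
print(round(relative_hazard, 2))  # 2.5
```

Without the usage counts, drug_a looks four times worse on raw event counts; with them, drug_b is 2.5x riskier per use.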


Python Alternative to Docker

Shiv does not solve for what containers and Docker/Podman/Buildah/Containerd solve for: re-launching processes at boot and failure, launching processes in chroots or cgroups (with least privileges), limiting access to network ports, limiting access to the host filesystem, building chroots / images, [...]

You can run build tools like shiv with a RUN instruction in a Dockerfile and get some caching.

You can build a zipapp with shiv (in a build container) and run the zipapp in a container.

Should the zipapp contain the test suite(s) and test_requires so that the tests can be run in an environment most similar to production?

It's much easier to develop with code on the filesystem (instead of in a zipapp).

It's definitely faster to read the whole zipapp into RAM than to stat and read each imported module from the filesystem once at startup.

There may be a better post title than the current "Python Alternative to Docker"? Shiv is a packaging utility for building Python zipapps. Shiv is not an alternative to process isolation with containers (or VMs).


$6B United Nations Agency Launches Bitcoin, Ethereum Crypto Fund

"UNICEF launches Cryptocurrency Fund: UN Children’s agency becomes first UN Organization to hold and make transactions in cryptocurrency" https://www.unicef.org/press-releases/unicef-launches-crypto...

From https://www.unicefusa.org/ :

> UNICEF USA helps save and protect the world's most vulnerable children. UNICEF USA is rated one of the best charities to donate to: 89% of every dollar spent goes directly to help children.


Supreme Court allows blind people to sue retailers if websites aren't accessible

"a11y": Accessibility

https://a11yproject.com/ has patterns, a checklist for checking web accessibility, resources, and events.

awesome-a11y has a list of a number of great resources for developing accessible applications: https://github.com/brunopulis/awesome-a11y

In terms of W3C specifications [1], you've got: WAI-ARIA (Web Accessibility Initiative: Accessible Rich Internet Applications) [2], and WCAG: Web Content Accessibility Guidelines [3]. The new W3C Payment Request API [4] makes it easy for browsers to offer a standard (and probably(?) already accessible) interface for the payment data entry screen, at least.

There are a number of automated accessibility testing platforms. "[W3C WAI] Web Accessibility Evaluation Tools List" [5] lists quite a few. Can someone recommend a good accessibility testing tool? Is Google Lighthouse (now included with Chrome Devtools and as a standalone script) a good tool for accessibility reviews?

[1] https://github.com/brunopulis/awesome-a11y/blob/master/topic...

[2] https://www.w3.org/TR/using-aria/

[3] https://www.w3.org/WAI/standards-guidelines/wcag/

[4] https://www.w3.org/TR/payment-request/

[5] https://www.w3.org/WAI/ER/tools/


Streamlit: Turn a Python script into an interactive data analysis tool

Cool!

requests_cache caches HTTP requests in one SQLite database. [1] pandas-datareader can cache external data requests with requests-cache. [2]

dask.cache can do opportunistic caching (of 2GB of data). [3]

How does streamlit compare to jupyter voila dashboards (with widgets and callbacks)? They just launched a new separate github org for the project. [4] There's a gallery of voila dashboard examples. [5]

> Voila serves live Jupyter notebooks including Jupyter interactive widgets.

> Unlike the usual HTML-converted notebooks, each user connecting to the Voila tornado application gets a dedicated Jupyter kernel which can execute the callbacks to changes in Jupyter interactive widgets.

> - By default, voila disallows execute requests from the front-end, preventing execution of arbitrary code.

[1] https://github.com/reclosedev/requests-cache

[2] https://pandas-datareader.readthedocs.io/en/latest/cache.htm...

[3] https://docs.dask.org/en/latest/caching.html

[4] https://github.com/voila-dashboards/voila

[5] https://blog.jupyter.org/a-gallery-of-voil%C3%A0-examples-a2...

Access control and resource exhaustion are challenges with building any {Flask, framework_x,} app [from Jupyter notebooks]. First it's "HTTP Digest authentication should be enough for now"; then it's "let's use SSO and LDAP" (and review every release); then it's "why is it so slow?". JupyterHub has authentication backends, spawners, and per-user container/VM resource limits.

> Each user on your JupyterHub gets a slice of memory and CPU to use. There are two ways to specify how much users get to use: resource guarantees and resource limits. [6]

[6] https://zero-to-jupyterhub.readthedocs.io/en/latest/user-res...

Some notes re: voila and JupyterHub:

> The reason for having a single instance running voila only is to allow non JupyterHub users to have access to the dashboards. So without going through the Hub auth flow.

> What are the requirements in your case? Voila can be installed in the single user Docker image, so that each user can also use it on their own server (as a server extension for example). [7]

[7] https://github.com/voila-dashboards/voila/issues/112


Scott’s Supreme Quantum Supremacy FAQ

Who even asked these questions?

I question this. All of this.


I believe Feynman originally asked the QC question many many years ago. What an exciting milestone and a great FAQ.

"Always naysaying! Everything I create!"


Ask HN: How do you handle/maintain local Python environments?

I'm having some trouble figuring out how to handle my local Python. I'm not asking about 2 vs 3 - that ship has sailed - I'm confused on which binary to be using. From the way I see it, there's at least 4 different Pythons I could be using:

1 - Python shipped with OS X/Ubuntu

2 - brew/apt install python

3 - Anaconda

4 - Getting Python from https://www.python.org/downloads/

And that's before getting into how you get numpy et al installed. What's the general consensus on which to use? It seems like the OS X default is compiled with Clang while brew's version is with GCC. I've been working through this book [1] and found this thread [2]. I really want to make sure I'm using fast/optimized linear algebra libraries, is there an easy way to make sure? I use Python for learning data science/bioinformatics, learning MicroPython for embedded, and general automation stuff - is it possible to have one environment that performs well for all of these?

[1] https://www.amazon.com/Python-Data-Analysis-Wrangling-IPython/dp/1449319793

[2] https://www.reddit.com/r/Python/comments/46r8u0/numpylinalgsolve_is_6x_faster_on_my_mac_than_on/


I also prefer conda for the same reasons.

Precompiled MKL is really nice. Conda and conda-forge now build for aarch64. There are very few wheels for aarch64 on PyPI. Conda can install things like Qt (IPython-qt, spyder,) and NodeJS (JupyterLab extensions).

If I want to switch python versions for a given condaenv (instead of just creating a new condaenv for a different CPython/PyPy version), I can just run e.g. `conda install -y python=3.7` and it'll reinstall everything in the depgraph that depended on the previous python version.

I always just install miniconda instead of the whole anaconda distribution. I always create condaenvs (and avoid installing anything in the root condaenv) so that I can `conda-env export -f environment.yml` and clean that up.

BinderHub ( https://mybinder.org/ ) builds Docker containers from {git repos, Zenodo, FigShare,} with repo2docker (which implements REES, the Reproducible Execution Environment Specification) and launches them on free cloud instances running JupyterLab. This means that all I have to do is add an environment.yml to my git repo to get Binder support, so that people can just click the badge in the README to launch JupyterLab with all of the dependencies installed.

REES supports a number of dependency specifications: requirements.txt, Pipfile.lock, environment.yml, aptSources, postBuild. With an environment.yml, I can install the necessary CPython/PyPy version and everything else.
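For example, a minimal environment.yml that repo2docker can build from (the package list is illustrative):

```yaml
# environment.yml — repo2docker installs this with conda
name: example
channels:
  - conda-forge
dependencies:
  - python=3.7
  - numpy
  - pandas
```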

...

In my dotfiles, I have a setup_miniconda.sh script that installs miniconda into per-CPython-version CONDA_ROOT and then creates a CONDA_ENVS_PATH for the condaenvs. It may be overkill because I could just specify a different python version for all of the conda envs in one CONDA_ENVS_PATH, but it keeps things relatively organized and easily diffable: CONDA_ROOT="~/-wrk/-conda37" CONDA_ENVS_PATH="~/-wrk/-ce37"

I run `_setup_conda 37; workon_conda|wec dotfiles` to work on the ~/-wrk/-ce37/dotfiles condaenv and set _WRD=~/-wrk/-ce37/dotfiles/src/dotfiles.

Similarly, for virtualenvwrapper virtualenvs, I run `WORKON_HOME=~/-wrk/-ve37 workon|we dotfiles` to set all of the venv cdaliases; i.e. then _WRD="~/-wrk/-ve37/dotfiles/src/dotfiles" and I can just type `cdwrd|cdw` to cd to the working directory. (Some of the other cdaliases are: {cdwrk, cdve|cdce, cdvirtualenv|cdv, cdsrc|cds}. So far, I have implemented cdalias support for bash, IPython, and vim)

One nice thing about defining _WRD is I can run `makew <tab>` and `gitw` to `cd $_WRD; make <tab>` and `git -C $_WRD` without having to change directory and then `cd -` to return to where I was.

So, for development, I use a combination of virtualenvwrapper, pipsi, conda, and some shell scripts in my dotfiles that I should get around to releasing and maintaining someday. https://westurner.github.io/dotfiles/venv

For publishing projects, I like environment.yml because of the REES support.


Is the era of the $100 graphing calculator coming to an end?

For $100, you can buy a Pinebook with an 11" or 14" screen, a multitouch trackpad, gigabytes of storage, WiFi, a keyboard without a numpad, and an ARM processor.

On this machine, you can create reproducible analyses with JupyterLab; do arithmetic with Python; work with multidimensional arrays with NumPy, SciPy, Pandas, xarray, Dask; do machine learning with Statsmodels, Scikit-learn, Dask-ML, TPOT; create books of these notebooks (containing code, notes (in Markdown, which is easily transformed to HTML), and LaTeX equations) with jupyter-book, nbsphinx, git + BinderHub; store the revision history of your discoveries; publish what you've discovered and learned to public or private git repositories; and complete graded exercises with nbgrader.

But the task is to prepare for a world of mental arithmetic, no validation, no tests, no reference materials, and no search engines; and CAS (Computer Algebra Systems) like SymPy and Sage are not allowed.

On this machine, you can write code, write papers, build spreadsheets and/or Jupyter notebooks, run physical simulations, explore the stars, and play games and watch videos. Videos like: Khan Academy videos and exercises that you can watch and do, with validation, until you've achieved mastery and move on to the next task on your todo.txt list.

But the task is to preserve your creativity and natural curiosity despite the compulsory education system's demands for quality control and allocative efficiency; in an environment where drama and popularity are the solutions to relatedness and acceptance needs.

I have three of these $100 calculators in my toolbox. It's been so long since I've powered them on that I'm concerned that the rechargeable AAA batteries are leaking battery acid.

For $100, you can buy an ARM notebook, install conda and conda-forge packages, and build sweet visualizations to collaborate with colleagues on (with Seaborn (matplotlib), HoloViews, Altair, or Plotly).

"You must buy a $100 calculator that only runs BASIC and ASM, and only use it for arithmetic so that we can measure you."

Hand tools are fun, but please don't waste any more of my compulsory time.

[+]
[-]

Reinventing Home Directories

[+]
[+]
[+]
[+]
[+]

Why do you think that is? Why have production grade Linux distributions all chosen to adopt systemd?

With SysV init, how do you securely launch processes in cgroups, such that they'll consistently restart when the process happens to terminate, with stdout and stderr logged with consistent timestamps, with process dependency models that allow for faster boots due to parallelization?

(edit)

Journalctl is far better than `tail -f /var/log/**` and parsing all of those timestamps and inconsistently escaped logfile formats. There's no good way to modify everything in /etc/init.d to log to syslog-ng or rsyslog. Systemd and journalctl solve that with unified logging.

IMO, there's no question that systemd is the better way, and I have zero nostalgia for spawning everything from whatever shell an /etc/init.d script's shebang specifies, without process restarting (and logging thereof), cgroups, or consistent logging.

[+]

When journald logfile corruption occurs, it's detected and it starts writing a new logfile.

When flatfile logfile corruption occurs, it's not detected and there are multiple logfile formats to contend with. And multiple haphazard logrotate configs.

Here's how to use a separate process to ship journald logs - from one file handle - to a remote logging service: https://unix.stackexchange.com/questions/394822/how-should-i...

While there is a systemd-journal-remote, it's not necessary for journald to try and replicate what's already solved and tested in rsyslog and syslog-ng.

It's quite a bit more work to add every new service to the syslog-ng or rsyslog configuration than to just ship one journald log.

Furthermore, service start/stop events are already in the same stream (with the same timestamp format) with the services' stdout and stderr.

Why hasn't anyone written fsck for corrupted journald recovery?

...

I have not needed to makedev and chown and chattr and chcon anything in very many years. When you accidentally, newbishly delete something from a static /dev, rebooting doesn't fix it, and you have no idea what the major:minor numbers are or were, it sucks bad.

When you're trying to boot a system on a different machine but it doesn't work because the NIC is in a different bus, it's really annoying to have to symlink /dev or modify /etc. With udevd, all you need to do is define a rule to map the busid device name to e.g. eth0. I can remember encountering the devfs race condition resulting in eth0 and eth1 being mapped to different devices on different boots; which was dangerous because firewall rules are applied to device names.

Udev has been standard since kernel 2.6 (replacing devfs).

"What problems does udev actually solve?" https://superuser.com/questions/686774/what-problems-does-ud...

With integrated udev and systemd, I have no reason to run a separate hotplugd with a different config format (again with no cgroup support) and a different logstream.

Perhaps ironically, here's a link to the presentation PDF that was posted yesterday: https://news.ycombinator.com/item?id=21036020

And my comments there:

> What a good idea.

> Here's the hyperlinkified link to the {systemd-homed.service, systemd-userdbd.service, homectl, userdbctl} sources from the PDF: https://github.com/poettering/systemd/tree/homed

> Hadn't heard of varlink: https://varlink.org/

> Is there a FIPS-like subset of the most-widely-available LUKS configs? Otherwise home directories won't work on systems that have a limited set of LUKS modules.

[-]

Serverless: slower and more expensive

It'd be interesting to see how much this same workload would cost with e.g. OpenFaaS on k8s with autoscaling to zero; though there you'd also need to include maintenance costs like OS and FaaS stack upgrades. https://docs.openfaas.com/architecture/autoscaling/

[-]

Entropy can be used to understand systems

Maximum entropy: https://en.wikipedia.org/wiki/Maximum_entropy

Here's a quote of my own tweet about a comment on a schema:BlogPost: https://twitter.com/westurner/status/1048125281146421249:

> “When Bayes, Ockham, and Shannon come together to define machine learning” https://towardsdatascience.com/when-bayes-ockham-and-shannon...

> Comment: "How does this relate to the Principle of Maximum Entropy? How does Minimum Description Length relate to Kolmogorov Complexity?"
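To make the maximum-entropy principle concrete, here's a small stdlib-only sketch: among distributions over n outcomes, the uniform one maximizes Shannon entropy H = -sum(p * log2(p)).

```python
# Shannon entropy in bits; the uniform distribution maximizes it.
import math

def shannon_entropy(probs):
    """H = -sum(p * log2(p)); the 0 * log(0) term is taken as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]

print(shannon_entropy(uniform))  # 2.0 bits: the maximum for 4 outcomes
print(shannon_entropy(skewed))   # ~1.36 bits: less uncertain
```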

[+]
[-]

New Query Language for Graph Databases to Become International Standard

Graph query languages are nice and all, but what about Linked Data here? Queries of schemaless graphs miss lots of data because without a schema this graph calls it "color" and that graph calls it "colour" and that graph calls it "色" or "カラー". (Of course this is also an issue even when there is a defined schema; but it's hardly possible to just happen to have comprehensible inter or even intra-organizational cohesion without e.g. RDFS and/or OWL and/or SHACL for describing (and changing) the shape of the data)
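To make the "color"/"colour" point concrete, here's a hedged, library-free sketch of how a JSON-LD @context maps differing property names onto one shared URI so merged graphs stay queryable. (A real processor like pyld does much more; this only does term-to-URI expansion, and the documents are hypothetical.)

```python
# Two graphs disagree on the property name; a JSON-LD @context maps
# both terms to the same URI, so the expanded data is identical.
SCHEMA_COLOR = "https://schema.org/color"

doc_a = {"@context": {"color": SCHEMA_COLOR}, "color": "red"}
doc_b = {"@context": {"colour": SCHEMA_COLOR}, "colour": "red"}

def expand(doc):
    """Minimal term expansion: replace context terms with their URIs."""
    ctx = doc.get("@context", {})
    return {ctx.get(k, k): v for k, v in doc.items() if k != "@context"}

print(expand(doc_a) == expand(doc_b))  # True: both mean schema:color "red"
```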

So, the task is then to compile schema-aware SPARQL to GQL or GraphQL or SQL or interminable recursive SQL queries or whatever it is.

For GraphQL, there's GraphQL-LD (which somewhat unfortunately contains a hashtag-indeterminate dash). I cite this in full here because it's very relevant to the GQL task at hand:

"GraphQL-LD: Linked Data Querying with GraphQL" (2018) https://comunica.github.io/Article-ISWC2018-Demo-GraphQlLD/

> GraphQL is a query language that has proven to be popular among developers. In 2015, the GraphQL framework [3] was introduced by Facebook as an alternative way of querying data through interfaces. Since then, GraphQL has been gaining increasing attention among developers, partly due to its simplicity in usage, and its large collection of supporting tools. One major disadvantage of GraphQL compared to SPARQL is the fact that it has no notion of semantics, i.e., it requires an interface-specific schema. This therefore makes it difficult to combine GraphQL data that originates from different sources. This is then further complicated by the fact that GraphQL has no notion of global identifiers, which is possible in RDF through the use of URIs. Furthermore, GraphQL is however not as expressive as SPARQL, as GraphQL queries represent trees [4], and not full graphs as in SPARQL.

> In this work, we introduce GraphQL-LD, an approach for extending GraphQL queries with a JSON-LD context [5], so that they can be used to evaluate queries over RDF data. This results in a query language that is less expressive than SPARQL, but can still achieve many of the typical data retrieval tasks in applications. Our approach consists of an algorithm that translates GraphQL-LD queries to SPARQL algebra [6]. This allows such queries to be used as an alternative input to SPARQL engines, and thereby opens up the world of RDF data to the large amount of people that already know GraphQL. Furthermore, results can be translated into the GraphQL-prescribed shapes. The only additional requirement is their queries would now also need a JSON-LD context, which could be provided by external domain experts.

> In related work, HyperGraphQL [7] was introduced as a way to expose access to RDF sources through GraphQL queries and emit results as JSON-LD. The difference with our approach is that HyperGraphQL requires a service to be set up that acts as a intermediary between the GraphQL client and the RDF sources. Instead, our approach enables agents to directly query RDF sources by translating GraphQL queries client-side.

All of these RDFS vocabularies and OWL ontologies provide structure that minimizes the costs of merging and/or querying multiple datasets: https://lov.linkeddata.es/dataset/lov/

All of these schema.org/Dataset s in the "Linked Open Data Cloud" are easier to query than a schemaless graph: https://lod-cloud.net/ . Though one can query schemaless graphs with SPARQL, as well.

For reference, RDFLib has a bunch of RDF graph implementations over various key/value and SQL store backends. RDFLib-SQLAlchemy does query parametrization correctly in order to minimize the risk of query injection. FOR THE RECORD, SQL Injection is the CWE Top 25 #1 most prevalent security weakness; which is something that any new spec and implementation should really consider before launching anything other than an e.g. overly-verbose JSON-based query language that people end up bolting a micro-DSL onto. https://github.com/RDFLib/rdflib-sqlalchemy

Most practically, I frequently want to read a graph of objects into RAM; update, extend, and interlink; and then transactionally save the delta back to the store. This requires a few things: (1) an efficient binary serialization protocol like Apache Arrow (SIMD), Parquet, or any of the BSON binary JSONs; (2) a transactional local store that can be manually synchronized with the remote store until it's consistent.

SPARQL Update was somewhat of an out-of-scope afterthought. Here's SPARQL 1.1 Update: https://www.w3.org/TR/sparql11-update/

Here's SOLID, which could be implemented with SPARQL on GQL, too; though all the re-serialization really shouldn't be necessary for EAV triples with a named graph URI identifier: https://solidproject.org/

5 star data: PDF -> XLS -> CSV -> RDF (GQL, AFAIU (but with no URIs(!?))) -> LOD https://5stardata.info/en/

[+]

> Linked Data tends to live in a semantic web world that has a lot of open world assumptions. While there are a few systems like this out there, there aren't many. More practically focused systems collapse this worldview down into a much simpler model, and property graphs suit just fine.

Data integration is cost prohibitive. In n years' time, the task is "Let's move all of these data silos into a data lake housed in our singular data warehouse; and then synchronize and also copy data around to efficiently query it in one form or another."

Linked data enables data integration from day one: linking tragically-silo'd records within disparate databases.

There are very very many systems that share linked data. Some only label some of the properties with URIs in templates. Some enable federated online querying.

When you develop a schema for only one application implementation, you're tragically limiting the future value of the data.

> There's nothing wrong with enabling linked data use cases, but you don't need RDF+SPARQL+OWL and the like to do that.

Can you name a property graph use case that cannot be solved with RDFS and SPARQL?

> The "semantic web stack" I think has been shown by time and implementation experience to be an elegant set of standards and solutions for problems that very few real world systems want to tackle.

TBH, I think the problem is that people don't understand the value in linking our data silos through URIs; and so they don't take the time to learn RDFS or JSON-LD (which is pretty simple and useful for very important things like SEO: search engine result cards come from linked data embedded in HTML attributes (RDFa, Microdata) or JSON-LD)

The action buttons to 'RSVP', 'Track Package', and 'View Issue' on Gmail emails are schema.org JSON-LD.

Applications can use linked data in any part of the stack: the database, the messages on the message queue, in the UI.

You might take a look at all of the use cases that SOLID solves for and realize how much unnecessary re-work has gone into indexing structs and forms validation. These are all the same app with UIs for interlinked subclasses of https://schema.org/Thing with unique inferred properties and aggregations thereof.

> In the intervening 2 full generations of tech development that have happened since a lot of those standards were born, some of the underlying stuff too (most particularly XML and XML-NS) went from indispensable to just plain irritating.

Without XSD, for example, we have no portable way to share complex fractions.

There's a compact representation of JSON-LD that minimizes record schema overhead (which gzip or lzma generally handle anyway)

https://lod-cloud.net is not a trivial or insignificant amount of linked data: there's real value in structuring property graphs with standard semantics.

Are our brains URI-labeled graphs? Nope, and we spend a ton of time talking to share data. Eventually, it's "well let's just get a spreadsheet and define some columns" for these property graph objects. And then, the other teams' spreadsheets have very similar columns with different labels and no portable datatypes (instead of URIs)

[+]
[+]
[+]

What was the vision?

The RDFJS "Comparison of RDFJS libraries" wiki page lists a number of implementations; though none for React or AngularJS yet, unfortunately. https://www.w3.org/community/rdfjs/wiki/Comparison_of_RDFJS_...

There's extra work to build general purpose frameworks for Linked Data. It may have been hard for any firm with limited resources to justify doing it the harder way (for collective returns)

Dokieli (SOLID (LDP), WebID, W3C Web Annotations) is a pretty cool - if deceptively simple-looking - showcase of what's possible with Linked Data; it just needs some CSS and a revenue model to pay for moderation. https://dokie.li/

> property graphs are demonstrably easier to work with for most use cases.

How do you see property graphs as distinct from RDF?

People build terrible apps without schema or validation and leave others to clean that up.

[+]

I added an answer in context to the comments on the answer you've linked but didn't add a link from the comments to the answer. Here's that answer:

> (in reply to the comments on this answer: https://stackoverflow.com/a/30167732 )

> When an owl:inverseOf production rule is defined, the inverse property triple is inferred by the reasoner either when adding or updating the store, or when selecting from the store. This is a "materialized relation"

> Schema.org - an RDFS vocabulary - defines, for example, https://schema.org/isPartOf as the inverse property of hasPart. If both are specified, it's not necessary to run another graph pattern query to traverse a directed relation in the other direction. (:book1 schema:hasPart ?o), (?o schema:isPartOf :book1), (?s schema:hasPart :chapter2)

> It's certainly possible to use RDFS and OWL to describe schema for and within neo4j property graphs; but there's no reasoner to e.g. infer inverse properties or do schema validation.

> Is there any RDF graph that neo4j cannot store? RDF has datatypes and languages for objects: you'd need to reify properties where datatypes and/or languages are specified (and you'd be re-implementing well-defined semantics)

> Can every neo4j graph be represented with RDF? Yes.

> RDF is a representation for graphs for which there are very many store implementations that are optimized for various use cases like insert and query performance.

> Comparing neo4j to a particular triplestore (with reasoning support) might be a more useful comparison given that all neo4j graphs can be expressed as RDF.

And then, some time later, I realize that I want/need to: (3) apply production rules to do inference at INSERT/UPDATE/DELETE time or SELECT time (and indicate which properties were inferred (x is a :Shape and a :Square, so x is also a :Rectangle; x is a :Rectangle and :width and :height are defined, so x has an :area)); (4) run triggers (that execute code written in a different language) when data is inserted, updated, modified, or linked to; (5) asynchronously yield streaming results to message queue subscribers who were disconnected when the cached pages were updated

[-]

A Python Interpreter Written in Python

What an excellent 500-line introduction to the byterun bytecode interpreter / virtual machine: https://github.com/nedbat/byterun
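In the same spirit, a toy stack-machine interpreter for a made-up three-opcode bytecode (not CPython's actual opcodes) shows the dispatch-loop-plus-value-stack design at byterun's core:

```python
# A toy stack-machine VM: each opcode pushes to or pops from a value stack.
def run(code, consts):
    stack = []
    for op, arg in code:
        if op == "LOAD_CONST":      # push a constant onto the value stack
            stack.append(consts[arg])
        elif op == "BINARY_ADD":    # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "RETURN_VALUE":  # pop and return the result
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode: {op}")

# Equivalent to evaluating 7 + 5:
program = [("LOAD_CONST", 0), ("LOAD_CONST", 1),
           ("BINARY_ADD", None), ("RETURN_VALUE", None)]
print(run(program, consts=[7, 5]))  # 12
```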

Also, proceeds from optional purchases of the AOSA books go to Amnesty International. https://aosabook.org/

[-]

Reinventing Home Directories – systemd-homed [pdf]

What a good idea.

Here's the hyperlinkified link to the {systemd-homed.service, systemd-userdbd.service, homectl, userdbctl} sources from the PDF: https://github.com/poettering/systemd/tree/homed

Hadn't heard of varlink: https://varlink.org/

Is there a FIPS-like subset of the most-widely-available LUKS configs? Otherwise home directories won't work on systems that have a limited set of LUKS modules.

[-]

Weld: Accelerating numpy, scikit and pandas as much as 100x with Rust and LLVM

There's also RustPython, a Rust implementation of CPython 3.5+: https://news.ycombinator.com/item?id=20686580

> https://github.com/RustPython/RustPython

[+]
[+]
[+]
[+]
[+]
[+]
[+]
[+]
[+]
[+]
[+]
[-]

Craftsmanship–The Alternative to the 4 Hour Work Week

> To be successful over the course of a career requires the application and accumulation of expertise. This assumes that for any given undertaking you either provide expertise or you are just a bystander. It’s the experts that are the drivers — an expertise that is gained from a curiosity, and a mindset of treating one’s craft very seriously.

[-]

Solar and Wind Power So Cheap They’re Outgrowing Subsidies

[+]
[+]
[+]
[+]
[+]
[+]
[+]
[+]
[+]
[+]

No, you're splitting hairs.

There are direct and indirect subsidies. Indirect subsidies include externalities: external costs paid by everyone else (that the government should be incentivizing reductions in by requiring the folks causing them to pay)

Semantic digressions aside, they're earning while everyone else pays costs resultant from their operations (and from our apparent inability to allocate with e.g. long term security, health, and prosperity as primary objectives for the public sphere)

[+]
[+]
[+]
[+]
[+]

"subsidies" includes both direct and indirect subsidies.

We can measure direct subsidies by measuring real and effective tax rates.

We can measure indirect subsidies like healthcare costs paid by Medicare with subjective valuations of human life and rough estimates of the value of a person's health and contribution to growth in GDP, and future economic security.

But who has the time for this when we're busy paying to help folks who require disaster relief services from the government and NGOs (neither of which are preventing further escalations in costs)

[+]
[-]

Show HN: Python Tests That Write Themselves

[+]
[+]

pytype (Google) [1] does static type inference; PyAnnotate (Dropbox) [2] and MonkeyType (Instagram) [3] do dynamic / runtime PEP 484 type annotation inference [4]

[1] https://github.com/google/pytype

[2] https://github.com/dropbox/pyannotate

[3] https://github.com/Instagram/MonkeyType

[4] https://news.ycombinator.com/item?id=19454411
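Not these tools' actual APIs, but the underlying runtime-tracing idea can be sketched in a few lines: wrap a function, record the concrete argument and return types observed during execution, then report a PEP 484-style signature.

```python
# A minimal sketch of runtime type inference (the idea behind
# PyAnnotate/MonkeyType, not their real APIs).
import functools

def trace_types(fn, log):
    @functools.wraps(fn)
    def wrapper(*args):
        result = fn(*args)
        log.append((fn.__name__,
                    tuple(type(a).__name__ for a in args),
                    type(result).__name__))
        return result
    return wrapper

observed = []

def add(a, b):
    return a + b

add = trace_types(add, observed)
add(1, 2)
add(1.5, 2.5)

for name, arg_types, ret in observed:
    print(f"def {name}({', '.join(arg_types)}) -> {ret}")
# def add(int, int) -> int
# def add(float, float) -> float
```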

[-]

Most Americans see catastrophic weather events worsening

The stratifications on this are troubling.

> But there are wide differences in assessments by partisanship. Nine in 10 Democrats think weather disasters are more extreme, compared with about half of Republicans.

It's not a partisan issue: we all pay these costs.

> Majorities of adults across demographic groups think weather disasters are getting more severe, according to the poll. College-educated Americans are slightly more likely than those without a degree to say so, 79 percent versus 69 percent.

Weather disasters are getting more severe. It is objectively, quantitatively true that weather disasters are getting more frequent and more severe.

[+]

> Source? What definitions are being used for severity? How is the sample of events selected? Is there a statistically-significant effect or might it be random variation?

These are great questions that any good skeptic / data scientist should always be asking. Here are some summary opinions based upon meta analyses with varyingly stringent inclusion criteria.

( I had hoped that the other top-level post I posted here would develop into a discussion, but these excerpts seem to have bubbled up. https://news.ycombinator.com/item?id=20919368 )

"Scientific consensus on climate change" lists concurring, non-commital, and opposing groups of persons with and without conflicting interests: https://en.wikipedia.org/wiki/Scientific_consensus_on_climat...

USGCRP, "2017: Climate Science Special Report: Fourth National Climate Assessment, Volume I" [Wuebbles, D.J., D.W. Fahey, K.A. Hibbard, D.J. Dokken, B.C. Stewart, and T.K. Maycock (eds.)]. U.S. Global Change Research Program, Washington, DC, USA, 470 pp, doi: 10.7930/J0J964J6.

"Chapter 8: Droughts, Floods, and Wildfire" https://science2017.globalchange.gov/chapter/8/

"Chapter 9: Extreme Storms" https://science2017.globalchange.gov/chapter/9/

"Appendix A: Observational Datasets Used in Climate Studies" https://science2017.globalchange.gov/chapter/appendix-a/

The key findings in this report do list supporting evidence and degrees of confidence in predictions about the frequency and severity of severe weather events.

I'll now proceed to support the challenged claim that disaster severity and frequency are increasing by citing disaster relief cost charts which do not directly support the claim. Unlike your typical televised debate or congressional session, I have visual aids, a computer, and links to the sources I've referenced. Finding the datasets ( https://schema.org/Dataset ) for these charts may be something that someone has time for; the costs to taxpayers and insurance holders are certainly increasing for a number of reasons.

"Taxpayer spending on U.S. disaster fund explodes amid climate change, population trends" (2019) has a nice chart displaying "Disaster-relief appropriations, 10-year rolling median" https://www.washingtonpost.com/us-policy/2019/04/22/taxpayer...

"2018's Billion Dollar Disasters in Context" includes a chart from NOAA: "Billion-Dollar Disaster Event Types by Year (CPI-Adjusted)" with the title embedded in the image text - which I searched for - and eventually found the source of: [1] https://www.climate.gov/news-features/blogs/beyond-data/2018...

[1] "Billion-Dollar Weather and Climate Disasters: Time Series" (1980-2019) https://www.ncdc.noaa.gov/billions/time-series

[+]
[+]

The article seems to have focused on perceptions of persons who aren't concerned with taking an evidence-based look (at various types of storms: floods, cyclones (i.e. hurricanes), severe thunderstorms, windstorms). Regardless, costs are increasing. I've listed a few sources here: https://news.ycombinator.com/item?id=20925127

"2017: Climate Science Special Report: Fourth National Climate Assessment, Volume I" > "Chapter 9: Extreme Storms" lists a number of relevant Key Findings with supporting evidence (citations) and degrees of confidence: https://science2017.globalchange.gov/chapter/9/

[+]

"Billion-Dollar Weather and Climate Disasters: Time Series" (1980-2019) https://www.ncdc.noaa.gov/billions/time-series

[+]

> My guess is you and I see very different things despite consuming the very same article. I lean conservative/libertarian (generally speaking),

HN specifically avoids politics. In context to the in-scope article, when you say "conservative/libertarian" do you mean: fiscally conservative (haven't seen a deficit hawk in decades other than "Read my lips. No new taxes" followed by responsibly raising taxes), socially libertarian (Liberty as a fundamental right; if you're not violating the rights of others the government is not obligated or even granted the right to intervene at all), or conservative as in imposing your particular traditional standard of moral values which you believe are particular to a particular side of the aisle?

Or, do you mean that you're libertarian in regards to the need and the right to regulate business and industry in the interest of consumers ("laissez faire")? I'm certainly not the only person to observe that lack of regulation results in smog-filled cities due to un-costed 'externalities' in a blind pursuit of optimization for short-term profit.

At issue here, I think, is whether we think we can avert future escalations of costs by banding together to address climate change now; and how best to achieve the Paris Agreement targets that we set for ourselves (despite partisan denial, delusion, and indifference to increasing YoY costs [1]) https://en.wikipedia.org/wiki/Paris_Agreement

I'm personally and financially far more concerned about the long-term costs of climate change than a limited number of special interests who can very easily diversify and/or divest to take advantage of the exact same opportunities.

> and I am deeply distrustful of government (for extremely good reasons I believe), so I know for a fact that my interpretation of the article is going to be heavily distorted by that. Any logical inconsistency, ambiguousness, disingenuousness, technical dishonesty, or anything else along those lines is going to get red flagged in my mind, whereas others will read it in a much more forgiving fashion. And in an article on a different political hot topic, we will switch our behaviors.

While governments (and militaries (TODO)) do contribute substantially to emissions and resultant climate change, I think it unnecessary to qualify that unregulated decisions by industry should be the primary focus here. Industry has done far more to cause climate change than governments (which can more efficiently provide certain services useful to all citizens)

> In such threads, I think it would be extremely interesting for people with opposing views to post excerpts of the parts that "catch your attention", with an explanation of why. This is kind of what happens anyway, but I'm thinking with a completely different motive: rather than quoting excerpts with commentary to argue your ~political side of the issue with the goal of "winning the argument", take an unemotional, more abstract view of your personal cognitive processing of the article,

These people aren't doing jack about the problem because they haven't reviewed this chart: "Billion-Dollar Weather and Climate Disasters: Time Series" (1980-2019) https://www.ncdc.noaa.gov/billions/time-series

Maybe they want insurance payouts, which result in higher premiums. Maybe the people who built in those locations should be paying the costs.

> and post commentary on ~why/how you believe you feel you consider that important on a psychological level. Psychological self-analysis is famously difficult, but even with moderate success I suspect some very interesting things would rise to the surface.

They don't even care because they refuse to accept that it's a problem.

The article was ineffectual at addressing the very real problem.

From https://news.ycombinator.com/item?id=20925127 :

> ( I had hoped that the other top-level post I posted here would develop into a discussion, but these excerpts seem to have bubbled up. https://news.ycombinator.com/item?id=20919368 )

In this observational study of perceptions, college education was less predictive than party affiliation.

Maybe reframing this as a short-term money problem [1] would result in compassion for people who are suffering billions of dollars of loss every year.

[+]
[+]

> "2017: Climate Science Special Report: Fourth National Climate Assessment, Volume I" > "Chapter 9: Extreme Storms" lists a number of relevant Key Findings with supporting evidence (citations) and degrees of confidence: https://science2017.globalchange.gov/chapter/9/

How about a link to a chart indicating frequency and severity of severe weather events?

The Paris Agreement is predicated upon the link between human actions, climate change, and severe weather events. 195 countries have signed the Paris Agreement with consensus that what we're doing is causing climate change.

Here are some climate-relevant poll questions:

Do you think the costs of disaster relief will continue to increase due to frequency and severity of severe weather events?

Does it make sense to spend more on avoiding further climate change now rather than even more on disaster relief later?

How can you help climate refugees? Do you donate to DoD and National Guards? Do you donate to NGOs? How can we get better at handling more frequent and more severe disasters?

[-]

Emergent Tool Use from Multi-Agent Interaction

gdb | 2019-09-17 12:00:54 | 332

I, for one, really appreciate the raytracing in these visualizations. I wish for more box surfing examples.

[-]

Inkscape 1.0 Beta 1

`Ctrl + 4` to center view on page!

Pressure sensitive pencil for the PowerStroke Live Path Effect (LPE) "if a pressure sensitive device is available"!

[-]

Where Dollar Bills Come From

The 1914 $10 Dollar Bill was printed on hemp paper. Today, they're worth like $49.99. IDK how steady that price is over time; relative to the prices of other CPI All goods.

[+]
[+]
[-]

Monetary Policy Is the Root Cause of the Millennials’ Struggle

Volatility works out for people who save (who park capital in liquid assets that aren't doing work in order to have wheat for the eventual famine). These guys. They save, short like heck when the market is falling, and swoop in to save the day. What a great time to be selling 0% loans.

Personal Savings Rate (PSR) stratified by greatest generation and not greatest generation is also relevant. Are relatively fixed living expenses higher now? Yes. Is my generation just blowing what they could invest into interest-bearing investments on unnecessary stuff from Amazon? Yes. And expensive meals and drinks.

How have corporate profits and wages changed?

In their day, you put your gosh-danged money aside. For later. So that you have money later.

And that is why you should buy my book, entitled: "Invest in things with long term returns: don't buy shtuff you don't f need, save for tomorrow; and other financial advice"

Which brings me to: the cost of college textbooks and a college education in terms of average hourly wages.

By the way, over the longer term, index funds are likely to outperform actively managed funds. Gold may be likely to outperform the stock market. And, over the recent term -- this is for all you suckers out there -- cryptocurrencies have outperformed all stock and commodities markets. How much total wealth is being created on an annual basis here?

Payday loans have something like 300% APY.
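With hypothetical but typical terms ($15 fee per $100 for a 14-day loan), the annualization works out roughly like this:

```python
# Annualizing a per-period loan fee: simple APR vs. compounded APY.
fee, principal, days = 15.0, 100.0, 14
period_rate = fee / principal            # 0.15 per 14-day period
periods_per_year = 365 / days            # ~26.07 periods per year

apr = period_rate * periods_per_year     # simple annualized rate
apy = (1 + period_rate) ** periods_per_year - 1  # compounded

print(f"APR: {apr:.0%}")  # APR: 391%
print(f"APY: {apy:.0%}")  # far higher still, since the fee compounds
```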

How does 2% inflation affect trade when other central banking cabals haven't chosen the same target? "Devaluation"! "Treachery"!

[-]

Non-root containers, Kubernetes CVE-2019-11245 and why you should care

> At the same time, all the current implementations of rootless containers rely on user namespaces at their core. Not to be confused with what is referred to as non-root containers in this article, rootless containers are containers that can be run and managed by unprivileged users on the host. While Docker and other runtimes require a daemon running as root, rootless containers can be run by any user without additional capabilities.

non-root / rootless

[-]

How do black holes destroy information and why is that a problem?

[+]

"Why Quantum Information is Never Destroyed" re: determinism and T-Symmetry ("time-reversal symmetry") by PBS SpaceTime https://youtu.be/HF-9Dy6iB_4

Classical information is 'collapsed' quantum information, so that would mean that classical information is never lost either.

There appear to be multiple solutions for Navier-Stokes; i.e., the dynamics are somewhat chaotic.

If white holes are on the other side of black holes, Hawking radiation would not account for the entirety of the collected energy/information. Is our visible universe within a white hole? Is everything that's ever been embedded in the sidewall of a black hole shredder?

Maybe even recordings of dinosaurs walking; or is that lemurs walking in reverse?

Do 1/n, 1/∞, and n/∞ approach a symbolic limit where scalars should not be discarded; with piecewise operators?

Banned C standard library functions in Git source code

FWIW, here's awesome-static-analysis > Programming Languages > C/C++: https://github.com/mre/awesome-static-analysis/blob/master/R...

These tools have lists of functions not to use. Most of them — at least the security-focused ones — likely also include strcpy, strcat, strncpy, strncat, sprintf, and vsprintf, just like banned.h
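
banned.h works by poisoning these identifiers at compile time; a rough Python sketch of the equivalent source scan (the scanner itself is hypothetical, the function list follows banned.h):

```python
import re

# Functions banned in git's banned.h (unbounded string copies/formats)
BANNED = {"strcpy", "strcat", "strncpy", "strncat", "sprintf", "vsprintf"}

def find_banned_calls(c_source: str):
    """Return (line_number, function) pairs for calls to banned functions."""
    hits = []
    for lineno, line in enumerate(c_source.splitlines(), start=1):
        for fn in BANNED:
            # crude call-site match; a real checker would parse the AST
            if re.search(rf"\b{fn}\s*\(", line):
                hits.append((lineno, fn))
    return hits

src = 'int f(char *d, char *s) {\n  strcpy(d, s);\n  return 0;\n}\n'
print(find_banned_calls(src))  # [(2, 'strcpy')]
```

A text-level scan like this produces false positives (comments, strings), which is why the compile-time-poisoning approach of banned.h is the more robust one.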

Ask HN: What's the hardest thing to secure in a web-app?

"OWASP Top 10 Most Critical Web Application Security Risks" https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Proje...

> A1:2017-Injection, A2:2017-Broken Authentication, A3:2017-Sensitive Data Exposure, A4:2017-XML External Entities (XXE), A5:2017-Broken Access Control, A6:2017-Security Misconfiguration, A7:2017-Cross-Site Scripting (XSS), A8:2017-Insecure Deserialization, A9:2017-Using Components with Known Vulnerabilities, A10:2017-Insufficient Logging&Monitoring

"OWASP Top 10 compared to SANS CWE 25" https://www.templarbit.com/blog/2018/02/08/owasp-top-10-vs-s...

Crystal growers who sparked a revolution in graphene electronics

> This seven-metre-tall machine can squeeze carbon into diamonds

OT but, is this a thing now? Diamonds can be entangled.

Does it take more energy than mining for diamonds?

> Quantum Entanglement Links 2 Diamonds: Usually a finicky phenomenon limited to tiny, ultracold objects, entanglement has now been achieved for macroscopic diamonds at room temperature (2011) https://www.scientificamerican.com/article/room-temperature-...

Things to Know About GNU Readline

I map <up> to history-search-backward in my .inputrc; so I can type 'sudo ' and press <up> to cycle through everything starting with sudo:

    #  <up>      -- history search backward (match current input)
    "\e[A": history-search-backward
    #  <down>    -- history search forward (match current input)
    "\e[B": history-search-forward
https://github.com/westurner/dotfiles/blob/develop/etc/.inpu...

Is this macro from the article dangerous because it doesn't quote the argument?

  Control-j: "\C-a$(\C-e)"
I can never remember how expansion and variable substitution work in shells.

Yeah, but these two commands do different things:

  # prints a newline
  echo $(echo "-e a\nb")

  # prints "-e a\nb"
  echo "$(echo "-e a\nb")"

Show HN: Termpage – Build a webpage that behaves like a terminal

This looks useful.

FWIW, you can build a curses-style terminal GUI with Urwid (in Python) and use that through the web. AFAIU, it requires Apache; but it's built on Tornado (which is now built on Asyncio) so something more lightweight than Apache on a Pi should definitely be doable. Termpage with like a Go or Rust REST API may still be more lightweight, but more work.

Vimer - Avoid multiple instances of GVim with gvim --remote[-tab]-silent wrapper

I have a shell script I named 'e' (for edit) that does basically this. If VIRTUAL_ENV_NAME is set (by virtualenvwrapper), e opens a new tab in that gui vim remote if gvim or macvim are on PATH, or just in a console vim if not. https://github.com/westurner/dotfiles/blob/develop/scripts/e

'editwrd'/'ewrd'/'ew' does tab-completion relative to whatever $_WRD (working directory) is set to (e.g. by venv) and calls 'e' with that full path: https://github.com/westurner/dotfiles/blob/develop/scripts/_...

It's unfortunately not platform portable like vimer, though.

Electric Dump Truck Produces More Energy Than It Uses

What a cool use of gravitational potential energy. It would be interesting to learn how much more energy is produced by the regenerative braking system on the downhill, and whether they use the excess to load the truck.

Ask HN: Let's make an open source/free SaaS platform to tackle school forms

I have 4 kids. I am filling out all the start of school forms for each kid. I have to fill out these same forms each year. Are you doing the same thing? Let's make this year the last year we are manually filling out forms -- let's build a SaaS platform for school forms. Community built, open-sourced, free.

Brief sketch of the idea: survey monkey + docusign, but with a 100 pre-built templates for K-12 school situations. Medical emergency form. Carpool form. Field trip permission form. Backend gives schools an easy way to customize and track forms. Forms are emailed to parents and filled out online. Parent's information is saved so that any new form is pre-filled in with as much known info as possible.

Anyone feeling the same pain? Anyone want to join with me and do it?

Technically, a checkbox may qualify as a digital signature; however, identification / authentication and storage integrity are fairly challengeable (just as a written signature on a piece of paper with a date written on it is challengeable)

Given that notarization is not required for parental consent forms, I'm not sure what sort of server security expense is justified or feasible.

How much does processing all of the paper forms cost each school? Per-student?

In terms of storing a digital record of authorization, a private set of per-student OpenBadges with each OpenBadge issued by the school would be easy enough. W3C Verifiable Credentials (and Linked Data Signatures) are the latest standards for this sort of thing.

We could evaluate our current standards for chain of custody in regards to the level of trust we place in commercial e-signature platforms.

The school could send home a sheet with a QR code and a shorturl, but that would be more expensive than running hundreds of copies of the same sheet of paper.

The school could require a parent or guardian's email address for each student in the SIS (Student Information System) and email unique links to prefilled forms requesting authorization(s).

Just as with e-Voting, assuring that the person who checks a checkbox or tries to scribble their signature with a mouse or touchscreen is the authorized individual may be more difficult than verifying that a given written signature is that of the parent or guardian authorized to authorize.

AFAIU, Google Forms for School can include the logged-in user's username; but parents don't have school domain accounts with Google Apps for Education or Google Classroom.

How would the solution integrate with schools' existing SIS (Student Information Systems)? Upload a CSV of (student, {student info}, {guardian email(s)})? This is private information that deserves security, which costs money.

Which users can log in for the school and/or district to check the state of the permission/authorization requests and the PII (personally-identifiable information)?

While cryptographic signatures may be overkill as a substitute for permission slips, FWIW, a timestamp within a cryptographically-signed document only indicates what the local clock was set to at the time. Blockchains have relatively indisputable timestamps ("certainly no later than the time that the tx made it into a block"), but blockchains don't solve for proving the key-person relation at a given point in time.

And also, my parent or guardian said you can take me on field trips if you want. https://backpack.openbadges.org/

Ask HN: Is there a CRUD front end for databases (especially SQLite)?

I'm currently looking for a program (a simple executable) that "opens" an SQLite database and (via introspection of the schema) without any further configuration allows simple CRUD operations on the database.

Yes, there is DB Browser and a gazillion other database administration frontends, but it should really be limited to CRUD operations. No changing the table, the schema, the indexes. Simple UI.

For users that have no idea about SQL or databases.

Is there anything like that already done and ready to use?

There are lots of apps that do database introspection. Some also generate forms on the fly, but eventually it's necessary to: specify a forms widget for a particular field because SQL schema only describes the data and not the UI; and specify security authorization restrictions on who can create, read, update, or delete data.
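
The introspection half is genuinely easy, which is why so many tools do it; a minimal sketch with the stdlib sqlite3 module (the schema here is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE note (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")

# Introspect tables and columns the way a generic CRUD frontend would
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk)
    cols = [(row[1], row[2]) for row in con.execute(f"PRAGMA table_info({table})")]
    print(table, cols)  # note [('id', 'INTEGER'), ('title', 'TEXT'), ('body', 'TEXT')]
```

Everything a frontend can learn this way is column names and declared types; which widget to render and who may create/read/update/delete which rows has to come from configuration the schema doesn't carry.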

And then you want to write arbitrary queries to filter on columns that aren't indexed; but it's really dangerous to allow clients to run arbitrary SQL queries because there basically are no row/object-level database permissions (the application must enforce row-level permissions).

Datasette is a great tool for read-only database introspection and queries of SQLite databases. https://github.com/simonw/datasette

Sandman2 generates a REST API for an arbitrary database. https://github.com/jeffknupp/sandman2

You can generate Django models and then write admin.py files for each model/table that you want to expose in the django.contrib.admin interface.

There are a number of apps for providing a GraphQL API given introspection of a database that occurs at every startup or at runtime; but that doesn't solve for row-level permissions (or web forms)

If you have an OpenAPI spec for the REST API that runs atop the database, you can generate forms ("scaffolding") from the OpenAPI spec and then customize those with form widgets; optionally with something like json-schema.

It's not safe to allow introspected CRUD like e.g. phpMyAdmin for anything but development. If there are no e.g. foreign-key constraints specified in the SQL schema, a blindly-introspected UI very easily results in database corruption due to invalid foreign key references (because the schema doesn't specify which table.column a foreign key references).
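
SQLite makes this failure mode easy to demonstrate: a column without a declared (and enforced) foreign-key constraint happily accepts dangling references, which is exactly what a blindly-introspected edit form will write. A minimal sketch (table names invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in SQLite
con.execute("CREATE TABLE author (id INTEGER PRIMARY KEY)")

# post.author_id has no FK constraint: nothing stops a dangling reference
con.execute("CREATE TABLE post (id INTEGER PRIMARY KEY, author_id INTEGER)")
con.execute("INSERT INTO post VALUES (1, 999)")  # silently corrupt

# With the constraint declared, the same insert is rejected
con.execute("""CREATE TABLE post2 (id INTEGER PRIMARY KEY,
               author_id INTEGER REFERENCES author(id))""")
try:
    con.execute("INSERT INTO post2 VALUES (1, 999)")
except sqlite3.IntegrityError as e:
    print("rejected:", e)  # rejected: FOREIGN KEY constraint failed
```

Note the double trap: the constraint has to exist in the schema, and in SQLite the connection also has to turn enforcement on.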

Django models, for example, unify SQL schema and forms UI in models.py; admin.py is optional but really useful for scaffolding (such as when you're doing manual testing because you haven't yet written automated tests) https://docs.djangoproject.com/en/2.2/ref/contrib/admin/#mod...

California approves solar-powered EV charging network and electric school buses

> The press release from the company said, “heavy-duty vehicles produce more particulate matter than all of the state’s power plants combined”.

> […] for instance why only “10 school buses”?

IARC has recognized diesel exhaust as carcinogenic (lung cancer) since 2012.

Are there other electric school bus programs in the US?

(edit)

https://www.trucks.com/2019/03/22/can-electric-school-buses-...

> Most school systems don’t have sufficient capital to finance the high initial costs of electric bus purchases and charging infrastructure development, he said.

> In the U.S., the school bus market is about 33,000 to 35,000 vehicles per year – about six times more than transit buses.

You May Be Better Off Picking Stocks at Random, Study Finds

In addition to diversification that reduces risk of overexposure to down sectors or typically over-performing assets, index funds have survivorship bias: underperforming assets are replaced by assets that meet the fund's criteria.

Root: CERN's scientific data analysis framework for C++

> With frameworks like Python pandas, you always end up having to manually partition your data if it doesn’t fit in memory.

"Pandas Docs > Pandas Ecosystem > Out of Core" lists a number of solutions for working with datasets that don't fit into RAM: Blaze, Dask, Dask-ML (dask-distributed; Scikit-Learn, XGBoost, TensorFlow), Koalas, Odo, Ray, Vaex https://pandas-docs.github.io/pandas-docs-travis/ecosystem.h...

The dask API is very similar to the pandas API.
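
What those out-of-core tools automate is essentially the manual partitioning the quote complains about; a stdlib-only sketch of the chunked aggregation you'd otherwise write by hand (function and column names invented):

```python
import csv
import io
from collections import defaultdict

def chunked_group_sum(lines, key, value, chunksize=2):
    """Sum `value` per `key` without holding the whole file in memory,
    processing `chunksize` rows at a time -- the pattern that Dask,
    Blaze, Vaex, etc. automate (and parallelize) for you."""
    totals = defaultdict(float)
    reader = csv.DictReader(lines)
    chunk = []
    for row in reader:
        chunk.append(row)
        if len(chunk) == chunksize:
            for r in chunk:
                totals[r[key]] += float(r[value])
            chunk = []
    for r in chunk:  # leftover partial chunk
        totals[r[key]] += float(r[value])
    return dict(totals)

data = io.StringIO("sector,price\ntech,1.0\ntech,2.0\nenergy,5.0\n")
print(chunked_group_sum(data, "sector", "price"))
# {'tech': 3.0, 'energy': 5.0}
```

Summation partitions cleanly; operations like median or a sort don't, which is where the dedicated out-of-core libraries earn their keep.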

Are there any plans for ROOT to gain support for Apache Parquet, and/or Apache Arrow zero-copy reads and SIMD support, and/or https://RAPIDS.ai (Arrow, numba, Dask, pandas, scikit-learn, XGboost, spark, CUDA-X GPU acceleration, HPC)? https://arrow.apache.org/

https://root.cern.ch/root-has-its-jupyter-kernel (2015)

> Yet another milestone of the integration plan of ROOT with the Jupyter technology has been reached: ROOT now offers a Jupyter kernel! You can try it already now.

> ROOT is the 54th entry in this list and this is pretty cool. Now not only the PyROOT, the ROOT Python bindings, are integrated with notebooks but it's also possible to express your data mining in C++ within a notebook, taking advantage of all the powerful features of ROOT - plotting (now also interactive thanks to [JavaScript ROOT](https://root.cern.ch/js/)), multivariate analysis, linear algebra, I/O and reflection: all available within a notebook.

Does this work with JupyterLab now? (edit) Here's the JupyterLab extension developer guide: https://jupyterlab.readthedocs.io/en/stable/developer/extens... (edit) here's the gh issue: https://github.com/root-project/jsroot/issues/166

...

ROOT is now installable with conda: `conda install -c conda-forge root metakernel jupyterlab # notebook`

MesaPy: A Memory-Safe Python Implementation based on PyPy (2018)

> Since then, I’ve found RustPython [0] which is progressing toward feature parity with CPython but entirely written in Rust (!). A side benefit is that it compiles to WebAssembly, so you could sandbox it without too much extra overhead.

It's now possible to run JupyterLab entirely within a browser with jyve (JupyterLab + pyodide) https://github.com/iodide-project/pyodide/issues/431

Pyodide:

> Pyodide brings the Python runtime to the browser via WebAssembly, along with the Python scientific stack including NumPy, Pandas, Matplotlib, parts of SciPy, and NetworkX. The packages directory lists over 35 packages which are currently available.

Is the RustPython WASM build more performant or otherwise preferable to brython or pyodide?

Ask HN: Configuration Management for Personal Computer?

Hello HN,

Every couple of years I find myself facing the same old tired routine: migrating my stuff off some laptop or desktop to a new one, usually combined with an OS upgrade. Is there anything like the kind of luxuries we now consider normal on the server side (IaaS; Terraform; maybe Ansible) that can be used to manage your PC and that would make re-imaging it as easy as it is on the server side?

Ansible is worth the extra few minutes, IMHO.

+ (minimal) Bootstrap System playbook

+ Complete System playbook (that references group_vars and host_vars)

+ Per-machine playbooks stored alongside the ansible inventory, group_vars, and host_vars in a separate repo (for machine-specific kernel modules and e.g. touchpad config)

+ User playbook that calls my bootstrap dotfiles shell script

+ Bootstrap dotfiles shell script, which creates symlinks and optionally installs virtualenv+virtualenvwrapper, gitflow and hubflow, and some things with pipsi. https://github.com/westurner/dotfiles/blob/develop/scripts/b...

+ setup_miniconda.sh that creates a CONDA_ROOT and CONDA_ENVS_PATH for each version of CPython (currently py27-py37)

Over the years, I've worked with Bash, Fabric, Puppet, SaltStack, and now Ansible + Bash

I log shell commands with a script called usrlog.sh that creates per-$USER and per-virtualenv tab-delimited logfiles with unique per-terminal-session identifiers and ISO8601 timestamps. That makes it really easy to just grep for the apt/yum/dnf commands that I ran ad-hoc when I should've just taken a second to create an Ansible role with `ansible-galaxy init ansible-role-name` and referenced that in a consolidated system playbook with a `when` clause. https://westurner.github.io/dotfiles/usrlog.html#usrlog

A couple weeks ago I added an old i386 netbook to my master Ansible inventory and system playbook and VScode wouldn't install because VScode Linux is x86-64 only and the machine doesn't have enough RAM; so I created when clauses to exclude VScode and extensions on that box (with host_vars). Gvim with my dotvim works great there too though. Someday I'll merge my dotvim with SpaceVim and give SpaceMacs a try; `git clone; make install` works great, but vim-enhanced/vim-full needs to be installed with the system package manager first so that the vimscript plugin installer works and so that the vim binary gets updated when I update all.

I've tested plenty of Ansible server configs with molecule (in docker containers), but haven't yet taken the time to do a full workstation build with e.g. KVM or VirtualBox or write tests with testinfra. It should be easy enough to just run Ansible as a provisioner in a Vagrantfile or a Packer JSON config. VirtualBox supports multi-monitor VMs and makes USB passthrough easy, but lately Docker is enough for everything but Windows (with a PowerShell script that installs NuGet packages with chocolatey) and MacOS (with a few setup scripts that download and install .dmg's and brew) VMs. Someday I'll write or adapt Ansible roles for Windows and Mac, too.

I still configure browser profiles by hand; but it's pretty easy because I just saved all the links in my tools doc: https://westurner.github.io/tools/#browser-extensions

Someday, I'll do bookmarks sync correctly with e.g. Chromium and Firefox; which'll require extending westurner/pbm to support Firefox SQLite or a rewrite in JS with the WebExtension bookmarks API.

A few times, I've decided to write docs for my dotfiles and configuration management policies like someone else is actually going to use them; it seemed like a good exercise at the time, but invariably I have to figure out what the ultimate command sequence was and put that in a shell script (or a Makefile, which adds a dependency on GNU make that's often worth it)

Clonezilla is great and free, but things get out of date fast in a golden master image. It's actually possible to PXE boot clonezilla with Cobbler, but, AFAICT, there's no good way to secure e.g. per-machine disk or other config with PXE. Apt-cacher-ng can proxy-cache-mirror yum repos, too. Pulp requires a bit of RAM but looks like a solid package caching system. I haven't yet tested how well Squid works as a package cache when all of the machines are simultaneously downloading the exact same packages before a canary system (e.g. in a VM) has populated the package cache.

I'm still learning to do as much as possible with Docker containers and Dockerfiles or REES (Reproducible Execution Environment Specifications) -compatible dependency configs that work with e.g. repo2docker and https://mybinder.org/ (BinderHub)

GitHub Actions now supports CI/CD, free for public repositories

You can create a separate repo with your own CI config that pulls in the code you want to test; and thus ignore the code's CI config file. When something breaks, you'd then need to determine in which repo something changed: in the CI config repo, or the code repo. And then, you have CI events attached to PRs in the CI config repository.

IMHO it makes sense to have CI config version controlled in the same repo as the code. Unless there's a good tool for bisecting across multiple repos and subrepos?

The Fed is getting into the Real-Time payments business

apo | 2019-08-05 17:19:30 | 96 | # | ^

This system will need to interface with other domestic and international settlement and payments networks.

There is thus an opportunity for standards, a need for federation, and a need to make it easy for big players to offer liquidity.

As far as I understand, e.g. Ripple and Stellar solve basically exactly the 24x7x365 RTGS problem that FedNow intends to solve; and, they allow all sorts of assets to be plugged into the network. Could FedNow just use a different UNL (Unique Node List) with participating banks operating trusted validators and/or offering liquidity ("liquidity provisioning")?

Notably, Ripple is specifically positioned to do international interbank real time gross settlement (RTGS) and remittances. Ripple could integrate with FedNow directly. Most efficiently, if it complies with KYC/AML requirements, FedNow could operate an XRP Ledger. Or, each bank could operate XRP Ledgers. https://xrpl.org/become-an-xrp-ledger-gateway.html

Getting thousands of banks to comply with an evolving API / EDI spec is no small task. Blockchain solutions require API compliance, have solutions for governance where there are a number of stakeholders seeking to reach consensus, and lack single points of failure.

Here's to hoping that we've learned something about decentralizing distributed systems for resiliency.

>> In contrast, the XRP Ledger requires 80 percent of validators on the entire network, over a two-week period, to continuously support a change before it is applied. Of the approximately 150 validators today, Ripple runs only 10. Unlike Bitcoin and Ethereum — where one miner could have 51 percent of the hashing power — each Ripple validator only has one vote in support of an exchange or ordering a transaction. https://news.ycombinator.com/item?id=19195050

So, you want to get banks onboard with only one USD stablecoin; but you don't want to deal with exchanges or FOREX or anything because that's a different thing? And, this is not just yet another ACH with lower clearance time?

> Interledger Architecture

https://interledger.org/rfcs/0001-interledger-architecture/

> Interledger provides for secure payments across multiple assets on different ledgers. The architecture consists of a conceptual model for interledger payments, a mechanism for securing payments, and a suite of protocols that implement this design.

> The Interledger Protocol (ILP) is the core of the Interledger protocol suite. Colloquially, the whole Interledger stack is sometimes referred to as "ILP". Technically, however, the Interledger Protocol is only one layer in the stack.

> Interledger is not a blockchain, a token, nor a central service. Interledger is a standard way of bridging financial systems. The Interledger architecture is heavily inspired by the Internet architecture described in RFC 1122, RFC 1123 and RFC 1009.

[...]

> You can envision the Interledger as a graph where the points are individual nodes and the edges are accounts between two parties. Parties with only one account can send or receive through the party on the other side of that account. Parties with two or more accounts are connectors, who can facilitate payments to or from anyone they're connected to.

> Connectors provide a service of forwarding packets and relaying money, and they take on some risk when they do so. In exchange, connectors can charge fees and derive a profit from these services. In the open network of the Interledger, connectors are expected to compete among one another to offer the best balance of speed, reliability, coverage, and cost.

Why should we prefer an immutable, cryptographically-signed blockchain solution over SQL/BigTable/MQ for FedNow?

Blockchain and payments standards: https://news.ycombinator.com/item?id=19813340

... Here's the notice and request for comment PDF: "Docket No. OP – 1670: Federal Reserve Actions to Support Interbank Settlement of Faster Payments" https://www.federalreserve.gov/newsevents/pressreleases/file...

"Federal Reserve announces plan to develop a new round-the-clock real-time payment and settlement service to support faster payments" https://www.federalreserve.gov/newsevents/pressreleases/othe...

A Giant Asteroid of Gold Won’t Make Us Richer

> this example shows that real wealth doesn’t actually come from golden hoards. It comes from the productive activities of human beings creating things that other human beings desire.

Value, Price, and Wealth

Good call. I don't know where I was going with that. Cost, price, value, and wealth.

Are there better examples for illustrating the differences between these kinds of distinct terms?

Less convertible collectibles like coins and baseball cards (that require energy for exchange) have (over time t): costs of production, marketing, and distribution; retail sales price; market price; and 'value' which is abstract relative (opportunity cost in terms of fiat currency (which is somehow distinct from price at time t (possibly due to 'speculative information')))

Wealth comes from relationships, margins between costs and prices, long term planning, […]

Abusing the PHP Query String Parser to Bypass IDS, IPS, and WAF

Possible solutions:

(1) Change all underscores in WAF rule URL attribute names to the appropriate non-greedy regex. Though I'm not sure about the regex the article suggests: '.' only matches one character, AFAIU.

(2) Add a config parameter to PHP that turns off the magical URL parameter name mangling that no webapp should ever depend on (and have it default to off, because if you rely on this 'feature' you should have to change a setting in php.ini anyway)
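
To make the underscore point concrete: PHP rewrites '.', ' ', and '[' in incoming parameter names to '_', so a WAF rule anchored to the literal underscored name misses the variants that PHP will still deliver to the app. A per-underscore character class (rather than a lone '.', which in any case matches exactly one character) covers them; a sketch with Python's re (the parameter name is invented):

```python
import re

# PHP maps '.', ' ' and '[' in parameter names to '_', so "news.id",
# "news id" and "news[id" all reach the application as "news_id".
# A WAF rule therefore needs a character class wherever the app-side
# name has an underscore:
rule = re.compile(r"news[_. \[]id")

for raw in ("news_id", "news.id", "news id", "news[id"):
    assert rule.search(raw), raw
print("all mangled variants matched")
```

This is the rule-side workaround; fixing the mangling at the PHP level, as in option (2), removes the mismatch entirely.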

Ask HN: Scripts/commands for extracting URL article text? (links -dump but)

I'd like to have a Unix script that basically generates a text file, named with the page title, containing the article text neatly formatted.

This seems to me to be something that would be so commonly desired by people that it would've been done and done and done a hundred times over by now, but I haven't found the magic search terms to dig up people's creations.

I imagine it starts with "links -dump", but then there's using the title as the filename, and removing the padded left margin, wrapping the text, and removing all the excess linkage.

I'm a beginner-amateur when it comes to shell scripting, python, etc. - I can Google well and usually understand script or program logic but don't have terms memorized.

Is this exotic enough that people haven't done it, or as I suspect does this already exist and I'm just not finding it? Much obliged for any help.

There could be collisions where `fname2` is the same for different pages; resulting in unintentionally overwriting. A couple possible solutions: generate a random string and append it to the filename, set fname2 to a hash of the URL, replace unsafe filename characters like '/' and/or '\' and/or '\n' with e.g. underscores. IIRC, URLs can be longer than the max filename length of many filesystems, so hashes as filenames are the safest solution. You can generate an index of the fetched URLs and store it with JSON or e.g. SQLite (with Records and/or SQLAlchemy, for example).
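
Of those options, hashing the URL is the simplest to get right; a sketch with the stdlib (the filename scheme is my own, not from the thread):

```python
import hashlib

def safe_filename(url: str, ext: str = ".txt") -> str:
    """Map a URL to a fixed-length filename: no over-long names, no
    unsafe characters like '/' or newlines, and collisions are not a
    practical concern (a truncated SHA-256 still carries 128 bits)."""
    return hashlib.sha256(url.encode("utf-8")).hexdigest()[:32] + ext

name = safe_filename("https://example.com/some/very/long/path?q=1")
print(name)  # 32 hex characters + ".txt"
```

The trade-off is that the names are opaque, which is exactly why you then keep a URL-to-filename index in JSON or SQLite.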

If or when you want to parallelize (to do multiple requests at once because most of the time is spent waiting for responses from the network) write-contention for the index may be an issue that SQLite solves for better than a flatfile locking mechanism like creating and deleting an index.json.lock. requests3 and aiohttp-requests support asyncio. requests3 supports HTTP/2 and connection pooling.
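
For the parallelization itself, the stdlib's asyncio already covers the pattern; a sketch with the network wait simulated by asyncio.sleep (no real HTTP client here, so the URLs are placeholders):

```python
import asyncio
import time

async def fetch(url: str) -> str:
    # stand-in for a real HTTP request (aiohttp etc.); the wait is
    # simulated, since most crawl time is spent blocked on the network
    await asyncio.sleep(0.1)
    return f"<html for {url}>"

async def crawl(urls):
    # all the requests wait concurrently: 10 URLs take ~0.1s, not ~1s
    return await asyncio.gather(*(fetch(u) for u in urls))

urls = [f"https://example.com/page/{i}" for i in range(10)]
start = time.perf_counter()
pages = asyncio.run(crawl(urls))
elapsed = time.perf_counter() - start
print(len(pages), f"{elapsed:.2f}s")
```

In a real crawler you'd bound the concurrency (e.g. with an asyncio.Semaphore) so you don't hammer one host with hundreds of simultaneous requests.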

SQLite can probably handle storing the text of as many pages as you throw at it with the added benefit of full-text search. Datasette is a really cool interface for sqlite databases of all sorts. https://datasette.readthedocs.io/en/stable/ecosystem.html#to...
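
A sketch of that SQLite-backed page store with full-text search, using the stdlib sqlite3 module (assumes the common case of an FTS5-enabled SQLite build; the schema and rows are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# FTS5 ships enabled in most modern SQLite builds (assumption here)
con.execute("CREATE VIRTUAL TABLE page USING fts5(url, title, body)")
con.executemany("INSERT INTO page VALUES (?, ?, ?)", [
    ("https://example.com/a", "Shell tips", "quoting and expansion"),
    ("https://example.com/b", "SQLite notes", "full-text search with fts5"),
])

# MATCH queries the FTS index across all indexed columns
rows = con.execute(
    "SELECT url FROM page WHERE page MATCH 'quoting'").fetchall()
print(rows)  # [('https://example.com/a',)]
```

That's the "added benefit" in one query: the same file that stores the page text also answers keyword searches, with no separate search service to run.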

...

Apache Nutch + ElasticSearch / Lucene / Solr are production-proven crawling and search applications: https://en.m.wikipedia.org/wiki/Apache_Nutch

> I imagine it starts with "links -dump", but then there's using the title as the filename,

The title tag may exceed the filename length limit, be the same for nested pages, or contain newlines that must be escaped.

These might be helpful for your use case:

"Newspaper3k: Article scraping & curation" https://github.com/codelucas/newspaper

lazyNLP "Library to scrape and clean web pages to create massive datasets" https://github.com/chiphuyen/lazynlp/blob/master/README.md#s...

scrapinghub/extruct https://github.com/scrapinghub/extruct

> extruct is a library for extracting embedded metadata from HTML markup.

> It also has a built-in HTTP server to test its output as JSON.

> Currently, extruct supports:

> - W3C's HTML Microdata

> - embedded JSON-LD

> - Microformat via mf2py

> - Facebook's Open Graph

> - (experimental) RDFa via rdflib

NPR's Guide to Hypothesis-Driven Design for Editorial Projects

HDD – Hypothesis-Driven Development – Research, Plan, Prototype, Develop, Launch, Review.

The article lists (and links to!) "Lean UX" [1] and Google Ventures' Design Sprint Methodology as inspirations.

[1] "Lean UX: Applying Lean Principles to Improve User Experience" http://shop.oreilly.com/product/0636920021827.do

[2] https://www.gv.com/sprint/

"How To Write A Technical Paper" [3][4] has: (Related Work, System Model, Problem Statement), (Your Solution), (Analysis), (Simulation, Experimentation), (Conclusion)

[3] https://news.ycombinator.com/item?id=18226543

[4] https://westurner.github.io/hnlog/#story-18225197

Gryphon: An open-source framework for algorithmic trading in cryptocurrency

reso | 2019-06-20 14:56:56 | 236 | # | ^

> As far as I know there isn't anything out there like this, in any market (not just cryptocurrencies).

How does Gryphon compare to Catalyst (Zipline)? https://github.com/enigmampc/catalyst

They list a few example algorithms: https://enigma.co/catalyst/example-algos.html

"Ask HN: Why would anyone share trading algorithms and compare by performance?" https://news.ycombinator.com/item?id=15802834 (pyfolio, popular [Zipline] algos shared through Quantopian)

"Superalgos and the Trading Singularity" https://news.ycombinator.com/item?id=19109333 (awesome-quant,)

Would CCXT be useful here? https://github.com/ccxt/ccxt

> The ccxt library currently supports the following 135 cryptocurrency exchange markets and trading APIs:

> In any case Gryphon uses Cython to compile itself down to C, which isn't quite as good as writing in native C but is a good chunk of the way there.

Would there be any advantage to asyncio with uvloop (which, like pandas, is also written in Cython, and is built on libuv like Node)? https://github.com/MagicStack/uvloop

IDK how much e.g. signal-processing routines benefit from asyncio yet.

Whether there's anything like an equilibrium in cryptoasset markets where there are no underlying fundamentals is debatable. While there's no book price, PoW coin prices might be rationally describable in terms of (average estimated cost of energy + cost per GH/s + 'speculative value')

A proxy for energy costs, chip costs, and speculative information

Are there standard symbols for this?

Can cryptoasset market returns be predicted with quantum harmonic oscillators as well? What NN topology can learn a quantum harmonic model? https://news.ycombinator.com/item?id=19214650

"The Carbon Footprint of Bitcoin" (2019) defines a number of symbols that could be standard in [crypto]economics texts. Figure 2 shows the "profitable efficiency" (which says nothing of investor confidence and speculative information and how we maybe overvalue teh security (in 2007-2009)). Figure 5 lists upper and lower estimates for the BTC network's electricity use. https://www.cell.com/joule/fulltext/S2542-4351(19)30255-7

Here's a cautionary dialogue about correlative and causal models that may also be relevant to a cryptoasset price NN learning experiment: https://news.ycombinator.com/item?id=20163734

Wind-Powered Car Travels Downwind Faster Than the Wind

> The unusual wind-powered car hit a top speed 2.86 times faster than the wind during one recent run,

I can't even.

NOAA upgrades the U.S. global weather forecast model

> Working with other scientists, Lin developed a model to represent how flowing air carries these substances. The new model divided the atmosphere into cells or boxes and used computer code based on the laws of physics to simulate how air and chemical substances move through each cell and around the globe.

> The model paid close attention to conserving energy, mass and momentum in the atmosphere in each box. This precision resulted in dramatic improvements in the accuracy and realism of the atmospheric chemistry.

Global Forecast System > Future https://en.wikipedia.org/wiki/Global_Forecast_System#Future

A plan to change how Harvard teaches economics

> apologists for the continuation of rent-seeking policies that entrench the rich and mighty.

This.

"THE IMF CONFIRMS THAT 'TRICKLE-DOWN' ECONOMICS IS, INDEED, A JOKE" https://psmag.com/economics/trickle-down-economics-is-indeed...

> INCREASING THE INCOME SHARE TO THE BOTTOM 20 PERCENT OF CITIZENS BY A MERE ONE PERCENT RESULTS IN A 0.38 PERCENTAGE POINT JUMP IN GDP GROWTH.

> The IMF report, authored by five economists, presents a scathing rejection of the trickle-down approach, arguing that the monetary philosophy has been used as a justification for growing income inequality over the past several decades. "Income distribution matters for growth," they write. "Specifically, if the income share of the top 20 percent increases, then GDP growth actually declined over the medium term, suggesting that the benefits do not trickle down."

"Causes and Consequences of Income Inequality: A Global Perspective" (2015) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C43&q=%22...

I'll add that we tend to overlook the level of government spending during periods of trickle-down economics, and so confound the two. Change in government spending (somewhat unfortunately, regardless of revenues) is a relevant factor.

Let's make this economy great again? How about you identify the decade(s) you're referring to and I'll show you the tax revenue (on income and now capital gains), the federal debt per capita, and the growth in GDP.

> All this is to say that, while data is useful for validation, it is not useful for prediction. The last thing we need is a black-box machine learning model to make major economic decisions off of. What we do need is proper models that are then validated, which don't necessarily need 'big data.'

Hand-wavy theory - predicated upon physical-world models of equilibrium which are themselves classical and incomplete - without validation is preferable to empirical models? Please.

Estimating the predictive power of some LaTeX equations is a different task than measuring error of a trained model.

If the model does not fit all of the big data, the error term is higher; regardless of whether the model was pulled out of a hat in front of a captive audience or deduced through inference from actual data fed through an unbiased analysis pipeline.

If the 'black-box predictive model' has lower error for all available data, the task is then to reverse the model! Not to argue for unvalidated theory.

Here are a few discussions regarding validating economic models, some excellent open econometric lectures (as notebooks that are unfortunately not in an easily-testable programmatic form), the lack of responsible validation, and some tools and datasets that may be useful for validating hand-wavy classical economic theories:

"When does the concept of equilibrium work in economics?" https://news.ycombinator.com/item?id=19214650

> "Lectures in Quantitative Economics as Python and Julia Notebooks" https://news.ycombinator.com/item?id=19083479 (data sources (pandas-datareader, pandaSDMX), tools, latex2sympy)

That's just an equation in a PDF.

(edit) Here's another useful thread: "Ask HN: Data analysis workflow?" https://news.ycombinator.com/item?id=18798244

Backtesting algorithmic trading algorithms is fairly simple: what actions would the model have taken given the available data at that time, and how would those trading decisions have affected the single objective dependent variable. Backtesting, paper trading, live trading.
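
The walk-forward loop described above can be sketched in a few lines of Python; the moving-average crossover strategy and the price series here are invented purely for illustration:

```python
# Minimal walk-forward backtest sketch. The strategy and prices are
# made-up illustrative examples, not any real trading model.

def backtest(prices, strategy):
    """At each step the strategy sees only prices[:t + 1] and returns a
    position (+1 long, 0 flat) held over the next price change."""
    pnl = 0.0
    for t in range(len(prices) - 1):
        position = strategy(prices[:t + 1])      # no look-ahead
        pnl += position * (prices[t + 1] - prices[t])
    return pnl

def ma_crossover(history, short=3, long=5):
    """Go long when the short moving average is above the long one."""
    if len(history) < long:
        return 0
    s = sum(history[-short:]) / short
    l = sum(history[-long:]) / long
    return 1 if s > l else 0

prices = [100, 101, 103, 102, 104, 107, 106, 108, 111, 110]
print(backtest(prices, ma_crossover))
```

The key property is that `strategy` only ever sees `prices[:t + 1]`, so the backtest cannot leak future data into the trading decision.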

Medicine (and also social sciences) is indeed more complex; but classification and prediction are still the basis for making treatment recommendations, for example.

Still, the task really is the same. A NN (like those that Torch, Theano, TensorFlow, and PyTorch produce; now with the ONNX standard for neural network model interchange) learns complex relations and really doesn't care about causality: minimize the error term. Recent progress in reducing the size of NN models e.g. for offline natural language classification on mobile devices has centered around identifying redundant neuronal connections ("from 100GB to just 0.5GB"). Reversing a NN into a far less complex symbolic model (with variable names) is not a new objective. NNs are being applied for feature selection, XGBoost wins many Kaggle competitions, and combinations thereof appear to be promising.

Actually testing second-order effects of evidence-based economic policy recommendations is certainly a complex highly-multivariate task (with unfortunate ideological digression that presumes a higher-order understanding based upon seeming truisms that are not at all validated given, in many instances, any data). A causal model may not be necessary or even reasonably explainable; and what objective dependent variables should we optimize for? Short term growth or long-term prosperity with environmental sustainability?

... "Please highly weight voluntary sustainability reporting metrics along with fundamentals" when making investments and policy decisions?

Were/are the World3 models causal? Many of their predictions have subsequently been validated. Are those policy recommendations (e.g. in "The Limits to Growth") even more applicable today, or do we need to add more labeled data and "Restart and Run All"?

...

From https://research.stlouisfed.org/useraccount/fredcast/faq/ :

> FREDcast™ is an interactive forecasting game in which players make forecasts for four economic releases: GDP, inflation, employment, and unemployment. All forecasts are for the current month—or current quarter in the case of GDP. Forecasts must be submitted by the 20th of the current month. For real GDP growth, players submit a forecast for current-quarter GDP each month during the current quarter. Forecasts for each of the four variables are scored for accuracy, and a total monthly score is obtained from these scores. Scores for each monthly forecast are based on the magnitude of the forecast error. These monthly scores are weighted over time and accumulated to give an overall performance.

> Higher scores reflect greater accuracy over time. Past months' performances are downweighted so that more-recent performance plays a larger part in the scoring.

The #GlobalGoals Targets and Indicators may be our best set of variables to optimize for from 2015 through 2030; I suppose all of them are economic.

Yes, some combination of variables/features (some of which are parameters we can specify), grouped and connected with operators, that correlates to an optimum occurring immediately or after a period of lag during which the other variables of the given complex system are dangerously assumed to remain constant.

> In fact, this is exactly the blindness that led to people missing the financial crisis

ML was not necessary to recognize the yield curve inversion as a strongly predictive signal correlating to subsequent contraction.

An NN can certainly learn to predict according to the presence or magnitude of a yield curve inversion, in combination with other features.

- [ ] Exercise: Learning this and other predictive signals by cherry-picking data and hand-optimizing features may be an extremely appropriate exercise.
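
A minimal version of that exercise, assuming an entirely synthetic data-generating process (invented here: recessions follow inversions with probability 0.9) and a one-feature logistic regression in pure Python:

```python
import math, random

random.seed(0)

# Synthetic, made-up data: spread = 10y minus 2y yield; the invented
# rule is that recessions follow inversions (spread < 0) with noise.
data = []
for _ in range(500):
    spread = random.uniform(-2.0, 3.0)
    p_recession = 0.9 if spread < 0 else 0.1
    data.append((spread, 1 if random.random() < p_recession else 0))

# One-feature logistic regression fit by batch gradient descent.
w, b = 0.0, 0.0
for _ in range(500):
    gw = gb = 0.0
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        gw += (p - y) * x
        gb += (p - y)
    w -= 0.1 * gw / len(data)
    b -= 0.1 * gb / len(data)

print(w, b)
```

The learned weight should come out negative: a lower (inverted) spread raises the predicted recession probability, which is exactly the signal the exercise is after.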

"This field is different because it's nonlinear, very complex, there are unquantified and/or uncollected human factors, and temporal"

Maybe we're not in agreement about whether AI and ML can do causal inference just as well if not better than humans manipulating symbols with human cognition and physical world intuition. The time is nigh!

In general, while skepticism and caution are appropriate, many fields suffer from a degree of hubris which prevents them from truly embracing stronger AI in their problem domain. (A human person cannot mutate symbol trees and validate with shuffled and split test data all night long)

> Anyone trying to understand economic phenomena needs to be keenly aware of how inference can be done, which requires an understanding (or an approach to) - that is, a theory - of the underlying mechanisms.

I read this as "must be biased by the literature and willing to disregard an unacceptable error term"; but also caution against rationalizing blind findings which can easily be rationalized as logical due to any number of cognitive biases.

Compared to AI, we're not too rigorous about inductive or deductive inference; we simply store generalizations about human behavior and predict according to syntheses of activations in our human NNs.

If you're suggesting that the information theory that underlies AI and ML is insufficient to learn what we humans have learned in a few hundred years of observing and attempting to optimize, I must disagree (regardless of the hardness or softness of the given complex field). Beyond a few combinations/scenarios, our puny little brains are no match for our department's new willing AI scientist.

> AI, ML and stats will merge, if they haven't already. The distinction will disappear. I believe the issues will not.

All tools are misapplied; including economics professionals and their advice.

Here's a beautiful Venn diagram of "Colliding Web Sciences" which includes economics as a partially independent category: https://www.google.com/search?q=colliding+web+sciences&tbm=i...

A causal model is a predictive model. We must validate the error of a causal model.

Why are theoretic models hand-wavy? "That's just because noise, the model is correct." No, such a model is insufficient to predict changes in dependent variables when in the presence of noise; which is always the case. How does validating a causal model differ from validating a predictive model with historical and future data?

Yield-curve inversion as a signal can be learned by human and artificial NNs. Period. There are a few false positives in historical data: indeed, describe the variance due to "noise" by searching for additional causal and correlative relations in additional datasets.

I searched for "python causal inference" and found a few resources on the first page of search results: https://www.google.com/search?q=python+causal+inference

CausalInference: https://pypi.org/project/CausalInference/

DoWhy: https://github.com/microsoft/dowhy

CausalImpact (Python port of the R package): https://github.com/dafiti/causalimpact

"What is the best Python package for causal inference?" https://www.quora.com/What-is-the-best-Python-package-for-ca...

Search: graphical model "information theory" [causal] https://www.google.com/search?q=graphical+model+%22informati...

Search: opencog causal inference https://www.google.com/search?q=opencog+causal+inference (MOSES, PLN,)
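
For intuition about what packages like CausalInference and DoWhy automate, here is a toy backdoor adjustment using only the standard library; the data-generating process (a confounder Z, a true treatment effect of +2) is invented for illustration:

```python
import random

random.seed(1)

# Toy observational data with a known confounder Z (illustrative only).
# True causal effect of T on Y is +2; Z raises both P(T=1) and Y.
rows = []
for _ in range(20000):
    z = random.random() < 0.5
    t = random.random() < (0.8 if z else 0.2)
    y = 2 * t + 5 * z + random.gauss(0, 1)
    rows.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

# Naive (confounded) estimate: E[Y|T=1] - E[Y|T=0]
naive = (mean([y for z, t, y in rows if t])
         - mean([y for z, t, y in rows if not t]))

# Backdoor adjustment: average the within-stratum contrasts over P(Z).
adjusted = 0.0
for zval in (True, False):
    pz = mean([1.0 if z == zval else 0.0 for z, t, y in rows])
    e1 = mean([y for z, t, y in rows if z == zval and t])
    e0 = mean([y for z, t, y in rows if z == zval and not t])
    adjusted += pz * (e1 - e0)

print(round(naive, 2), round(adjusted, 2))
```

The naive contrast is inflated by Z (about 5 here), while stratifying on Z and reweighting by P(Z) recovers roughly the true +2 -- the same idea those libraries implement behind a nicer API.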

If you were to write a pseudocode algorithm for an econometric researcher's process of causal inference (and also their cognitive processes (as executed in a NN with a topology)), how would that read?

(Edit) Something about the sufficiency of RL (Reinforcement Learning) for controlling cybernetic systems. https://en.wikipedia.org/wiki/Cybernetics

> What's the point of dumping a bunch of Google results here? At least half the results are about implementations of pretty traditional statistical / econometric inference techniques.

Here are some tools for causal inference (and a process for finding projects to contribute to instead of arguing about insufficiency of AI/ML for our very special problem domain here). At least one AGI implementation doesn't need to do causal inference in order to predict the outcomes of actions in a noisy field.

Weather forecasting models don't do causal inference, and don't need to.

> A/B testing

Is multi-armed bandit feasible for the domain? Or, in practice, are there too many concurrent changes in variables to have any sort of controlled experiment? Then aren't you trying to do causal inference with mostly observational data?

> I really don't see how a RL would help with any of this. Care to come up with something concrete?

The practice of developing models and continuing on with them when they seem to fit, and when citations or impact reinforce them, is very much an exercise in RL. This is a control system with a feedback loop. A "Cybernetic system". It's not unique. It's not too hard for symbolic or neural AI/ML. Stronger AI can or could do [causal] inference.

> By extension, it is impossible for any ML mechanism to predict unobserved interventions without being a causal model.

In lieu of a causal model, when I ask an economist what they think is going to happen and they aren't aware of any historical data - there is no observational data collected following the given combination of variables we'd call an event or an intervention - is it causal inference that they're doing in their head? (With their NN)

> Now, you and me, we can both agree that your model with yield curves is good enough.

Yield curves alone are insufficient due to the rate of false positives. (See: ROC curves for model evaluation, just like everyone else)
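
For reference, the area under the ROC curve can be computed directly as the probability that a positive example outscores a negative one; the recession scores and labels below are hypothetical:

```python
def roc_auc(scores, labels):
    """AUC via the rank-sum identity: the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties = 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical recession scores: higher score = stronger inversion signal.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(roc_auc(scores, labels))
```

An AUC of 0.5 is a coin flip and 1.0 is a perfect ranking, which makes it a reasonable first check on how much a single signal like the yield curve actually separates the two classes.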

> We could even agree that you would have found it before the financial crashes,

The given signal was disregarded as a false positive by the appointed individuals at the time; why?

> Some alien that has been analyzing financial systems all across the universe may disagree,

You're going to run out of clean water and energy, and people will be willing to pay for unhealthy sugar water and energy-inefficient transaction networks with a perception of greater security.

A "Martian scientist" approach is, IMHO, necessary because of our learned biases: we've inferred relations, and had them reinforced, in ways that cloud our assessment of new and novel solutions.

> Such is the difficulty of causal analysis.

What a helpful discussion. Thanks for explaining all of this to me.

Now, I need to go write my own definitions for counterfactual and DGP and include graphical models in there somewhere.

How can you possibly be arguing that we should not be testing models with all available data?

All models are limited by the data they're trained from; regardless of whether they are derived through rigorous, standardized, unbiased analysis or through laudable divine inspiration.

From https://news.ycombinator.com/item?id=19084622 :

> pandas-datareader can pull data from e.g. FRED, Eurostat, Quandl, World Bank: https://pandas-datareader.readthedocs.io/en/latest/remote_da...

> pandaSDMX can pull SDMX data from e.g. ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank; with requests-cache for caching data requests: https://pandasdmx.readthedocs.io/en/latest/#supported-data-p...

> To get out of this we have to consider not only what people have done in the past but how they are likely to respond to a given policy change, for which we have no historical data prior to when the policy is enacted, and so we need to make those predictions based on logic in addition to data or we go astray.

"Pete, it's a fool who looks for logic in the chambers of the human heart."

Logically, we might have said "prohibition will reduce substance abuse harms" but the actual data indicates that margins increased. Then, we look at the success of Portugal's decriminalization efforts and cannot at all validate our logical models.

Similarly, we might've logically claimed that "deregulation of the financial industry will help everyone" or "lowering taxes will help everyone", and the data does not support either claim.

So, while I share the concerns about Responsible AI and encoding biases (and second-order effects of making policy recommendations according to non-causal models without critically, logically thinking first) I am very skeptical about our ability to deduce causal relations without e.g. blind, randomized, longitudinal, interventional studies (which are unfortunately basically impossible to do with [economic] policy because there is no "ceteris paribus")

https://personalmba.com/second-order-effects/

"Causal Inference Book" https://news.ycombinator.com/item?id=17504366

> https://www.hsph.harvard.edu/miguel-hernan/causal-inference-...

> Causal inference (Causal reasoning) https://en.wikipedia.org/wiki/Causal_inference ( https://en.wikipedia.org/wiki/Causal_reasoning )

> If you think prohibition will reduce substance abuse but then you try it and it doesn't, well, you were wrong, so end prohibition.

Maybe we're at a local optimum, though. Maybe this is a sign that we should just double down, surge on in there, and get the job done by continuing to do the same thing and expecting different results. Maybe it's not the spec but the implementation.

Recommend a play according to all available data, and logic.

> This is also a strong argument for "laboratories of democracy" and local control -- if everybody agrees what to do then there is no dispute, but if they don't then let each local region have their own choice, and then we get to see what happens. It allows more experiments to be run at once. Then in the worst case the damage of doing the wrong thing is limited to a smaller area than having the same wrong policy be set nationally or internationally, and in the best case different choices are good in different ways and we get more local diversity.

"Adjusting for other factors," the analysis began.

- [ ] Exercise / procedure to be coded: Brainstorm and identify [non-independent] features that may create a more predictive model (a model with a lower error term). Search for confounding variables outside of the given data.

The New York Times course to teach its reporters data skills is now open-source

It's more work to verify all formulas that reference unnamed variables in a spreadsheet than to review the code inputs and outputs in a notebook.

"Teaching Pandas and Jupyter to Northwestern journalism students" [in DC] https://www.californiacivicdata.org/2017/06/07/dc-python-not...

> http://www.firstpythonnotebook.org/

You can also develop d3.js visualizations — just like NYT — with jupyter notebooks and whichever language(s).

"Data-Driven Journalism" ("ddj") https://en.wikipedia.org/wiki/Data-driven_journalism

http://datadrivenjournalism.net/

"The Data Journalism Handbook 1" https://datajournalism.com/read/handbook/one

"The Data Journalism Handbook 2" https://datajournalism.com/read/handbook/two

While there are a number of ScholarlyArticle journals that can publish notebooks, I'm not aware of any newspapers that are prepared to publish notebooks as NewsArticles. It's pretty easy to `jupyter nbconvert --to html` and `--to markdown`, or just 'Save as'.

Regarding expressing facts as verifiable claims with structured data in HTML and/or blockchains: "Fact Checks" https://news.ycombinator.com/item?id=15529140

Does this course recommend linking to every source dataset and/or including full citations (with DOI) in the article? Does this course recommend getting a free DOI for the published revision of an e.g. GitHub project repository (containing data, and notebooks and/or the article text) with Zenodo?

No Kings: How Do You Make Good Decisions Efficiently in a Flat Organization?

Group decision-making > Formal systems: https://en.wikipedia.org/wiki/Group_decision-making#Formal_s...

> Consensus decision-making, Voting-based methods, Delphi method, Dotmocracy

Consensus decision-making: https://en.wikipedia.org/wiki/Consensus_decision-making

There's a field that some people are calling "Collaboration Engineering". I learned about this from a university course in Collaboration.

6 Patterns of Collaboration [GRCOEB] — Generate, Reduce, Clarify, Organize, Evaluate, Build Consensus

7 Layers of Collaboration [GPrAPTeToS] — Goals, Products, Activities, Patterns of Collaboration, Techniques, Tools, Scripts

The group decision making processes described in the article may already be defined with the thinkLets design pattern language.

A person could argue against humming for various unspecified reasons.

I'll just CC this here from my notes, which everyone can read here [1]:

“Collaboration Engineering: Foundations and Opportunities” de Vreede (2009) http://aisel.aisnet.org/jais/vol10/iss3/7/

“A Seven-Layer Model of Collaboration: Separation of Concerns for Designers of Collaboration Systems” Briggs (2009) http://aisel.aisnet.org/icis2009/26/

Six Patterns of Collaboration “Defining Key Concepts for Collaboration Engineering” Briggs (2006) http://aisel.aisnet.org/amcis2006/17/

“ThinkLets: Achieving Predictable, Repeatable Patterns of Group Interaction with Group Support Systems (GSS)” http://www.academia.edu/259943/ThinkLets_Achieving_Predictab...

https://scholar.google.com/scholar?q=thinklets

[1] https://wrdrd.github.io/docs/consulting/team-building#collab...

4 Years of College, $0 in Debt: How Some Countries Make Education Affordable

It at least makes sense to pay for doctors and nurses to go to school, right? If you want to care for others and you do the work to earn satisfactory grades, I think that investing in your education would have positive ROI.

We had plans here in the US to pay for two years of community college for anyone ("America's College Promise"). IDK what happened to that? We should have called it #ObamaCollege so that everyone could attack corporate welfare and bad investments with no ROI.

New York has the Excelsior scholarship for CUNY and SUNY. Tennessee pays for college with lottery proceeds. Are there other state-level efforts to fund higher education in the US such that students can finish school debt-free or close to it?

There are MOOCs (online courses) which are worth credit hours for the percentage of people that commit to finishing the course. https://www.classcentral.com/

Khan Academy has free SAT, MCAT, NCLEX-RN, GMAT, and LSAT test prep and primary and supplementary learning resources. https://www.khanacademy.org/test-prep

Free education: https://en.wikipedia.org/wiki/Free_education

Ask HN: What jobs can a software engineer take to tackle climate change?

I'm a software engineer with a diverse background in backend, frontend development.

How do I find jobs related to tackling global warming and climate change in Europe for an English speaker?

Open to ideas and thoughts.

> I'm a software engineer with a diverse background in backend, frontend development.

> How do I find jobs related to tackling global warming and climate change in Europe for an English speaker?

While not directly answering the question, here are some ideas for purchasing, donating, creating new positions, and hiring people that care:

Write more efficient code. Write more efficient compilers. Optimize interpretation and compilation so that the code written by people with domain knowledge who aren't that great at programming who are trying to solve other important problems is more efficient.

Push for PPAs (Power Purchase Agreements) that offset energy use. Push for directly sourcing clean energy.

Use services that at least have 100% PPAs for the energy they use: services that run on clean energy sources.

Choose green datacenters.

- [ ] Add the capability for cloud resource schedulers like Kubernetes and Terraform to prefer or require clean energy datacenters.

Choose to work with companies that voluntarily choose to do sustainability reporting.

Work to help develop (and popularize) blockchain solutions that are more energy efficient and that have equal or better security assurances as less efficient chains.

Advocate for clean energy. Donate to NGOs working for our environment and for clean energy.

Invest in clean energy. There are a number of clean energy ETFs, for example. Better energy storage is a good investment.

Push for certified green buildings and datacenters.

- [ ] We should create some sort of a badge and structured data (JSONLD, RDFa, Microdata) for site headers and/or footers that lets consumers know that we're working toward '200% green' so that we can vote with our money.

Do not vote for people who are rolling back regulations that protect our environment. Pay an organization that pays lobbyists to work the system: that's the game.

Help explain why it's both environment-rational and cost-rational to align with national and international environmental sustainability and clean energy objectives.

Argue that we should make external costs internal in order that markets will optimize for what we actually want.

Thermodynamics is part of the physics curriculum for many software engineering and computer science degrees.

There are a number of existing solutions that solve for energy inefficiency due to unreclaimed waste heat.

"Thermodynamics of Computation Wiki" https://news.ycombinator.com/item?id=18146854

"Why Do Computers Use So Much Energy?" https://news.ycombinator.com/item?id=18139654

YC's request for startups: Government 2.0

There's money to be earned in solving for the #GlobalGoals Goals, Targets, and Indicators:

The Global Goals

1. No Poverty

2. Zero Hunger

3. Good Health & Well-Being

4. Quality Education

5. Gender Equality

6. Clean Water & Sanitation

7. Affordable & Clean Energy

8. Decent Work & Economic Growth

9. Industry, Innovation & Infrastructure

10. Reduced Inequalities

11. Sustainable Cities and Communities

12. Responsible Consumption & Production

13. Climate Action

14. Life Below Water

15. Life on Land

16. Peace, Justice & Strong Institutions

17. Partnerships for the Goals

https://en.wikipedia.org/wiki/Sustainable_Development_Goals

Almost 40% of Americans Would Struggle to Cover a $400 Emergency

> I always wonder what proportion of that group is due to insufficient income

According to the Social Security Administration [1]:

2017 Average net compensation: $48,251.57

2017 Median net compensation: $31,561.49

The FPL (Federal Poverty Level) income numbers for Medicaid and the Children's Health Insurance Program (CHIP) eligibility [2]:

>> $12,140 for individuals, $16,460 for a family of 2, $20,780 for a family of 3, $25,100 for a family of 4, $29,420 for a family of 5, $33,740 for a family of 6, $38,060 for a family of 7, $42,380 for a family of 8

Wages are not keeping up with corporate profits. That can't all be due to automation.

The minimum wage is only one factor linked to price inflation. We can raise wages and still keep inflation down to an ideal range.

Maybe it's that we don't understand what it's like to live on $12K or $32K a year (without healthcare due to lack of Medicaid expansion; due to our collective failure to instill charity as a virtue and getting people back on their feet as a good investment). How could we learn (or remember!) about what it's like to be in this position (without zero-interest bank loans to bail us out)?

> and what proportion is due to terrible financial literacy.

The r/personalfinance wiki is one good resource for personal finance. From [3]:

>> Personal Finance (budgets, interest, growth, inflation, retirement)

Personal Finance https://en.wikipedia.org/wiki/Personal_finance

Khan Academy > College, careers, and more > Personal finance https://www.khanacademy.org/college-careers-more/personal-fi...

"CS 007: Personal Finance For Engineers" https://cs007.blog

https://reddit.com/r/personalfinance/wiki

... How can we make personal finance a required middle and high school curriculum component? [4]

"What are some ways that you can save money in order to meet or exceed inflation?"

Dave Ramsey's 7 Baby Steps to financial freedom [5] seem like good advice? Is the debt snowball method ideal for minimizing interest payments?
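
A rough way to check the snowball-vs-avalanche question is to simulate both payoff orderings; the balances, monthly rates, and budget below are invented for illustration:

```python
def total_interest(debts, budget, priority):
    """Simulate fixed-budget repayment. debts: (balance, monthly_rate)
    pairs; priority: sort key choosing which debt gets the cash first."""
    debts = [[b, r] for b, r in debts]
    paid_interest = 0.0
    while any(b > 0 for b, _ in debts):
        for d in debts:                        # accrue monthly interest
            if d[0] > 0:
                i = d[0] * d[1]
                d[0] += i
                paid_interest += i
        cash = budget                          # spend the whole budget,
        for d in sorted(debts, key=priority):  # targeting by priority
            pay = min(cash, d[0])
            d[0] -= pay
            cash -= pay
    return paid_interest

# Illustrative balances and monthly rates, with a $500/month budget.
debts = [(5000, 0.02), (2000, 0.005)]
snowball = total_interest(debts, 500, priority=lambda d: d[0])    # smallest balance first
avalanche = total_interest(debts, 500, priority=lambda d: -d[1])  # highest rate first
print(round(snowball, 2), round(avalanche, 2))
```

In this toy setup the avalanche ordering (highest rate first) pays less total interest; the snowball trades some interest for the motivational win of clearing small balances sooner.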

[1] https://www.ssa.gov/OACT/COLA/central.html

[2] https://www.healthcare.gov/glossary/federal-poverty-level-fp...

[3] "Ask HN: How can you save money while living on poverty level?" https://news.ycombinator.com/item?id=18894582

[4] "Consumer science (a.k.a. home economics) as a college major" https://news.ycombinator.com/item?id=17894632

[5] https://www.daveramsey.com/dave-ramsey-7-baby-steps

Congress should grow the Digital Services budget, it more than pays for itself

> The U.S. Digital Service isn’t perfect, but it is clearly working. The team estimates that for every $1 million invested in USDS that the government will avoid spending $5 million and save thousands of labor hours. Over a five-year period, the team’s efforts will save $1.1 billion, redirect almost 2,000 labor years towards higher value work, and generate over 400 percent return on investment. Most importantly, USDS will continue to deliver better government services for the American people, including Veterans who deserve better.

> In the private sector, these kinds of numbers would not lead to a 50 percent cut in budget. Instead, you’d clearly invest further with that kind of return. Considering the ambitious goals set out in the President’s Management Agenda, the Trump Administration should double down on better support for the public, our troops, and our veterans. The best way to do that is clearly through investments like USDS.

Why would you halve the budget of a team that's yielding a more than 400% ROI (in terms of cost savings)?

https://en.wikipedia.org/wiki/United_States_Digital_Service

USDS reports 400% ROI in savings to the taxpayers who fund the government with tax revenue (instead of kicking the can down the road with debt financing) and improvements in customer service quality.

https://www.usaspending.gov (Federal Funding Accountability and Transparency Act of 2006 (Obama, McCain, Carper, Coburn)) has more fine-grained spending data, but not credit-free immutable distributed ledger transaction IDs, quantitative ROI stats, or performance.gov and #globalgoals goal alignment. We'd need a metadata field on spending bills to link to performance.gov and SDG Goals, Targets, and Indicators.

"Transparency and Accountability"

IIRC, here on HN, I've mentioned a number of times -- and quoted in full from -- the 13 plays of the USDS Digital Services Playbook; all of which are applicable to, and should probably be required reading for, all government IT and govtech: https://playbook.cio.gov/

There are forms with workflow states that need human review sometimes. USDS helps with getting those processes online in order to reduce costs, increase cost-efficiency, and increase quality of service.

The Trillion-Dollar Annual Interest Payment

> Given the recent actions of Congress, and the years of prior inaction in changing the nation’s fiscal path, the U.S. government’s annual interest payment will eclipse annual defense spending in only six years. By 2025, annual interest costs on the national debt will reach $724 billion, while annual defense spending will reach $706 billion. To put that into perspective, in the 2018 fiscal year, the U.S. government spent $325 billion in interest payments and spent $622 billion in defense (Exhibit 2).

Why would you cut taxes and debt finance our nation's future?

Oak, a Free and Open Certificate Transparency Log

> Great use case for blockchain technology

>> CT logs are already chained

Trillian is a centralized Merkle tree: it doesn't support native replication (AFAIU?), and there is still a password that can delete or recreate the chain. We can watch for any such inappropriate or errant modifications (due to e.g. solar flares) by manually replicating and verifying every entry in the chain, or by trusting that everything before whatever we consider to be a known hash (which could be colliding) is unmodified (since the last time we never verified those entries).

According to the Trillian README, Trillian depends upon MySQL/MariaDB, and thus internal/private replication is only as good as the SQL replication model (which doesn't have a distributed consensus algorithm like e.g. Paxos).

A Merkle tree alone is not a blockchain: though it provides more assurance of data integrity than a regular tree, verifying that the whole chain of hashes actually is good -- and distributed replication without configuring e.g. SSL certs -- are primary features of blockchains.
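The chaining property in question can be sketched in a few lines (a linear hash chain for illustration; Trillian actually maintains a Merkle *tree*, which additionally allows logarithmic inclusion proofs):

```python
import hashlib

# Minimal sketch of an append-only hash chain: each entry's hash commits to
# the previous head, so re-verifying the chain detects any rewritten entry.
def chain(entries):
    head = b""
    heads = []
    for e in entries:
        head = hashlib.sha256(head + e).digest()
        heads.append(head)
    return heads

log = [b"cert-1", b"cert-2", b"cert-3"]
heads = chain(log)
assert chain(log) == heads                  # an honest replica verifies
tampered = [b"cert-1", b"cert-X", b"cert-3"]
assert chain(tampered)[-1] != heads[-1]     # tampering changes the head
```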

[+]

Which components of the system are we discussing?

PKI is necessarily centralized: certs depend upon CA certs, which can depend upon other CA certs. If any CA is compromised (e.g. by key theft or brute force (which is practically infeasible given current ASIC resources' preference for legitimate income)), that CA can sign any CRL. A CT log and a CT log verifier can help us discover that a redundant -- and thus possibly unauthorized -- cert has been issued for a given domain listed in an x.509 cert CN/SAN.

The CT log itself - trillian, for Google and now LetsEncrypt, too - though, runs on MySQL; which has one root password.

The system of multiple independent, redundant CT logs is built upon databases that depend upon presumably manually configured replication keys.

Does my browser call a remote log verifier API over (hopefully pinned with a better fingerprint than MD5) HTTPS?

[+]

Centralized and decentralized are overloaded terms. We could argue that every system that depends upon DNS is centralized (and thus has a single point of failure).

We could describe replication models as centralized or decentralized. Master/master SQL replication is still not decentralized (regardless of whether there are multiple A records or multiple static IPs configured in the client).

With PKI, we choose the convenience of trusting a CA bundle over having to manually check every cert fingerprint.

Whether a particular chain is centralized or decentralized is often debated. When a few mining pools effectively choose which changes are accepted, that's not decentralized either.

That there are multiple redundant independent CT logs is a good thing.

How do I, as a concerned user, securely download (and securely mirror?) one or all of the CT logs and verify that each record hash depends upon the previous hash? If the browser relies upon a centralized API for checking hash fingerprints, how is that decentralized?

[+]
[-]

Death rates from energy production per TWh

Apparently the deaths are justified because energy.

Are the subsidies and taxes (incentives and penalties) rational in light of the relative harms of each form of energy?

"Study: U.S. Fossil Fuel Subsidies Exceed Pentagon Spending" https://www.rollingstone.com/politics/politics-news/fossil-f...

> The IMF found that direct and indirect subsidies for coal, oil and gas in the U.S. reached $649 billion in 2015. Pentagon spending that same year was $599 billion.

> The study defines “subsidy” very broadly, as many economists do. It accounts for the “differences between actual consumer fuel prices and how much consumers would pay if prices fully reflected supply costs plus the taxes needed to reflect environmental costs” and other damage, including premature deaths from air pollution.

IDK whether they've included the costs of responding to requests for help with natural disasters that are more probable due to climate change caused by these "externalities" / "external costs" of fossil fuels.

[+]

Why isn't the market choosing the least harmful, least lethal energy sources? Energy is largely substitutable: switching costs for consumers like hospitals are basically zero.

(Everyone is free to invest in clean energy at any time)

[+]

100% Renewable Energy https://en.wikipedia.org/wiki/100%25_renewable_energy

> The main barriers to the widespread implementation of large-scale renewable energy and low-carbon energy strategies are political rather than technological. According to the 2013 Post Carbon Pathways report, which reviewed many international studies, the key roadblocks are: climate change denial, the fossil fuels lobby, political inaction, unsustainable energy consumption, outdated energy infrastructure, and financial constraints.

We need to make the external costs of energy production internal in order to create incentives to prevent these fossil fuel deaths and other costs.

[-]

Use links not keys to represent relationships in APIs

A thing may be identified by a URI (/person/123) for which there are zero or more URL routes (/person/123, /v1/person/123). Each additional route complicates caching; redirects are cheap for the server but slower for clients.

JSON-LD does define a standard way to indicate that a value is a link: @id (which can be specified in an @context) https://www.w3.org/TR/json-ld11/

One additional downside to storing URIs instead of bare references is that it's more complicated to validate against a URI template than against a simple regex like \d+ or [abcdef\=\d+]+
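For illustration, the difference in validation effort (the patterns, route, and hostname here are hypothetical):

```python
import re
from urllib.parse import urlparse

ID_RE = re.compile(r"^\d+$")                   # bare surrogate key: trivial
URI_RE = re.compile(r"^/person/(?P<id>\d+)$")  # one known URI template

assert ID_RE.match("123")

# Validating a URI-valued reference means matching the template...
m = URI_RE.match("/person/123")
assert m and m.group("id") == "123"

# ...and a full URL reference needs scheme/host handling on top of that.
path = urlparse("https://api.example.com/person/123").path
assert URI_RE.match(path)
```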

[+]
[-]

No Python in Red Hat Linux 8?

/usr/bin/python can point to either /usr/bin/python3 or (as PEP 394 currently recommends) /usr/bin/python2

  $ alternatives --config python
FWIU, there are ubi8/python-27 and ubi8/python-36 Docker images. IDK if they set /usr/bin/python out of the box? Changing existing shebangs may not be practical for some applications (which will need to specify e.g. 'python4' if and when that occurs over the next 10 supported years of RHEL/CentOS 8)

[-]

JMAP: A modern, open email protocol

What are the optimizations in JMAP that make it faster than, say, Solid? Solid is built on a bunch of W3C Web, Security, and Linked Data standards: LDP (Linked Data Platform), JSON-LD (JSON Linked Data), WebID-TLS, REST, WebSockets, LDN (Linked Data Notifications). [1][2] Different worlds, I suppose.

There's no reason you couldn't represent RFC5322 data with RDF as JSONLD. There's now a way to do streaming JSON-LD.
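As a hedged sketch of that claim (the property names below are borrowed from schema.org's EmailMessage type and are illustrative, not a proposed standard mapping; the message-id and addresses are made up):

```python
import json

# A hypothetical RFC 5322 message expressed as a JSON-LD document.
message = {
    "@context": "https://schema.org",
    "@type": "EmailMessage",
    "@id": "mid:20190401.12345@example.com",   # link-valued identifier
    "sender": {"@type": "Person", "email": "alice@example.com"},
    "toRecipient": {"@type": "Person", "email": "bob@example.com"},
    "about": "Re: JMAP vs. Solid",
    "dateSent": "2019-04-01T12:00:00Z",
    "text": "There's no reason you couldn't...",
}

# Round-trips as ordinary JSON; a JSON-LD-aware client also gets the links.
doc = json.loads(json.dumps(message))
assert doc["@type"] == "EmailMessage"
```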

LDP does paging and querying.

Solid supports pubsub with WebSockets and LDN. It may or may not (yet?) be as efficient for synchronization as JMAP, but it's definitely designed for all types of objects with linked data web standards; and client APIs can just parse JSON-LD.

[1] https://github.com/solid/information#solid-specifications

[2] https://github.com/solid/solid-spec/issues/123 "WebSockets and HTTP/2" SSE (Server-Sent Events)

https://jmap.io/

JMAP: JSON Meta Application Protocol https://en.wikipedia.org/wiki/JSON_Meta_Application_Protocol

Is there an OpenAPI Specification for JMAP? There are a bunch of tools for Swagger / OpenAPI: DRY interactive API docs, server implementations, code generators: https://swagger.io/tools/open-source/ https://openapi.tools/

Does JMAP support labels, such that I don't need to download a message and its attachment twice and mark it as read twice, like with labels over IMAP?

How does this integrate with webauthn; is that a different layer?

(edit) Other email things: openpgpjs; Web Key Directory /.well-known/openpgpkey/*; if there's no webserver on the MX domain, you can use the ACME DNS challenge to get free 3-month certs from LetsEncrypt.

https://wiki.gnupg.org/WKD

[+]

> If we hypothetically allow for equal adoption & mindshare of both, and assume both are non-terrible designs, I'd guess the one designed for "all types of objects" is less likely to ever be as efficient as the one designed with a single use-case in mind.

This is a generalization that is not supported by any data.

Standards enable competing solutions. Competing solutions often result in performance gains and efficiency.

Hopefully, there will be performant implementations and we won't need to reinvent the wheel in order to synchronize and send notifications for email, contacts, and calendars.

[+]

To eliminate the need for domain-specific parser implementations on both server and client, to make it easy to index and search this structured data, and to link things with URIs and URLs like other web applications that also make lots of copies.

Solid is a platform for decentralized linked data storage and retrieval with access controls, notifications, WebID + OAuth/OpenID. The Wikipedia link and spec documents have a more complete description that could be retrieved and stored locally.

[-]

Grid Optimization Competition

From "California grid data is live – solar developers take note" https://news.ycombinator.com/item?id=18855820 :

>> It looks like California is at least two generations of technology ahead of other states. Let’s hope the rest of us catch up, so that we have a grid that can make an asset out of every building, every battery, and every solar system.

> +1. Are there any other states with similar grid data available for optimization; or any plans to require or voluntarily offer such a useful capability?

How do these competitions and the live actual data from California-only (so far; AFAIU) compare?

Are there standards for this grid data yet? Without standards, how generalizable are the competition solutions to real-world data?

[-]

Blockchain's present opportunity: data interchange standardization

What are the current standards efforts for blockchain data interchange?

W3C JSON-LD, ld-signatures + lds-merkleproof2017 (normalize the data before signing it so that the signature is representation-independent (JSONLD, RDFa, RDF, n-triples)), W3C DID Decentralized Identifiers, W3C Verifiable Claims, Blockcerts.org

W3C Credentials Community Group: https://w3c-ccg.github.io/community/work_items.html#draft-sp... (DID, Multihash (IETF), [...])

"Blockchain Credential Resources; a gist" https://gist.github.com/westurner/4345987bb29fca700f52163c33...

Specifically for payments:

https://www.w3.org/TR/?title=payment (the W3C Payment Request API standardizes browser UI payment/checkout workflows)

ILP: Interledger Protocol https://interledger.org/rfcs/0027-interledger-protocol-4/

> W3C JSON-LD

https://www.w3.org/TR/json-ld/ (JSON-LD 1.0)

https://www.w3.org/TR/json-ld11/ (JSON-LD 1.1)

> ld-signatures + lds-merkleproof2017 (normalize the data before signing it so that the signature is representation-independent (JSONLD, RDFa, RDF, n-triples))

https://w3c-dvcg.github.io/ld-signatures/

https://w3c-dvcg.github.io/lds-merkleproof2017/ (2017 Merkle Proof Linked Data Signature Suite)

> W3C DID Decentralized Identifiers

https://w3c-ccg.github.io/did-primer/

>> A Decentralized Identifier (DID) is a new type of identifier that is globally unique, resolveable with high availability, and cryptographically verifiable. DIDs are typically associated with cryptographic material, such as public keys, and service endpoints, for establishing secure communication channels. DIDs are useful for any application that benefits from self-administered, cryptographically verifiable identifiers such as personal identifiers, organizational identifiers, and identifiers for Internet of Things scenarios. For example, current commercial deployments of W3C Verifiable Credentials heavily utilize Decentralized Identifiers to identify people, organizations, and things and to achieve a number of security and privacy-protecting guarantees.

> W3C Verifiable Claims

https://github.com/w3c/verifiable-claims

https://w3c.github.io/vc-data-model/ (Data Model)

https://w3c.github.io/vc-use-cases/ (Use Cases: Education, Healthcare, Professional Credentials, Legal Identity)

> Blockcerts.org

https://blockcerts.org/

[-]

Ask HN: Value of “Shares of Stock options” when joining a startup

I got an offer from a US start-up (well, 25+ employees) which has an office in the EU, where I would join them.

The offer's base salary is good (i.e. higher than average for senior positions in that location), but I intend to negotiate it further, as I have other possible options. patio11's negotiation guide was a great read in that regard.

However, I'm relocating from a non-EU/US country, and I don't have the first idea about the financial systems, stock markets, how to evaluate "15k shares of stock options", or what a "Stock Option and Grant Plan" means, so I'm asking you fellow HNers about this part.

Do I just treat them as worthless and focus on base salary (as some internet sources suggest), or is there a formula to evaluate what they would be worth in, say, 2 years?

There are a number of options/equity calculators:

https://tldroptions.io/ ("~65% of companies will never exit", "~15% of companies will have low exits*", "~20% of companies will make you money")

https://comp.data.frontapp.com/ "Compensation and Equity Calculator"

http://optionsworth.com/ "What are my options worth?"

http://foundrs.com/ "Co-Founder Equity Calculator"
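As a rough back-of-envelope alternative to those calculators (every number and the dilution assumption below are made up for illustration, not advice):

```python
# Hedged expected-value sketch for startup stock options.
def option_expected_value(n_options, strike, share_value, p_exit, dilution=0.25):
    """Expected payout if a paying exit happens with probability p_exit,
    assuming further dilution before exit reduces per-share value."""
    diluted = share_value * (1 - dilution)
    intrinsic = max(diluted - strike, 0) * n_options
    return p_exit * intrinsic

# e.g. 15k options, $1 strike, $4/share preferred price, ~20% chance of exit
print(round(option_expected_value(15_000, 1.0, 4.0, 0.20), 2))  # → 6000.0
```

The point of the sketch: with a ~20% chance of any payout (per tldroptions.io's rough figures), the headline share count overstates expected value severely.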

[-]

CMU Computer Systems: Self-Grading Lab Assignments (2018)

These look fun; in particular the "Attack Lab".

Dockerfiles might be helpful and easy to keep updated. Alpine Linux or just busybox are probably sufficient?

The instructors' image could extend FROM the assignment image and run a few tests with e.g. testinfra (pytest).

You can also test code written in C with gtest.

I haven't read through all of the materials: are there suggested (automated) fuzzing tools? Does OSS-Fuzz solve this?

Are there references to CWE and/or the SEI CERT C Coding Standard rules? https://wiki.sei.cmu.edu/confluence/plugins/servlet/mobile?c...

"How could we have changed our development process to catch these bugs/vulns before release?"

"If we have 100% [...] test coverage, would that mean we've prevented these vulns?"

What about 200%?

[+]
[+]

⟨100%| + |100%⟩ = 200%!

(Even code with 100% branch coverage may have common weaknesses like those that these (great) labs have students exploit)

[-]

Show HN: Debugging-Friendly Tracebacks for Python

cknd | 2019-04-28 14:50:29 | 121 | # | ^

pytest also has helpful tracebacks; though only for test runs.

With nose-progressive, you can specify --progressive-editor or update the .noserc so that traceback filepaths are prefixed with your preferred editor command.

vim-unstack parses paths from stack traces / tracebacks (for a number of languages including Python) and opens each in a split at that line number. https://github.com/mattboehm/vim-unstack

Here's the Python regex from my hackish pytb2paths.sh script:

  '\s+File "(?P<file>.*)", line (?P<lineno>\d+), in (?P<modulestr>.*)$'
https://github.com/westurner/dotfiles/blob/develop/scripts/p...
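For illustration, the same regex applied to a sample traceback (the file paths are made up):

```python
import re

# Pull editor-ready file:line locations out of a Python traceback.
TB_LINE = re.compile(r'\s+File "(?P<file>.*)", line (?P<lineno>\d+), in (?P<modulestr>.*)$')

tb = '''Traceback (most recent call last):
  File "app/main.py", line 12, in <module>
    run()
  File "app/core.py", line 34, in run
    raise ValueError("boom")
'''
paths = [f"{m['file']}:{m['lineno']}"
         for line in tb.splitlines()
         if (m := TB_LINE.match(line))]
print(paths)  # → ['app/main.py:12', 'app/core.py:34']
# e.g. feed these to `$EDITOR +lineno file` or vim-unstack-style splits
```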

[-]

Why isn't 1 a prime number?

[+]
[+]

> You can also dial emergency contacts without unlocking the phone. They are accessible from the medical ID page on iOS, I assume Android has similar.

You can set a Lock Screen Message by searching for "Lock Screen Message" in the Android Settings.

You can also create an "ICE (In Case of Emergency)" contact.

[-]

Rare and strange ICD-10 codes

zdw | 2019-04-27 21:50:58 | 68 | # | ^
[+]
[+]
[+]

> No, you misunderstand the terminology. "Subsequent encounter" means with the doctor not with the rattlesnake

You can reference ICD codes with the schema.org/code property of schema.org/MedicalEntity and subclasses. https://schema.org/docs/meddocs.html

"Subsequent encounter" is poorly defined. IMHO, there should be a code for this.

[+]
[-]

Python Requests III

[+]

asyncio, HTTP/2, connection pooling, timeouts, Python 3.6+

README > "Feature Support" https://github.com/kennethreitz/requests3/blob/master/README...

[+]
[-]

Post-surgical deaths in Scotland drop by a third, attributed to a checklist

fanf2 | 2019-04-17 09:43:04 | 1036 | # | ^
[+]
[+]
[+]
[+]

GitHub and GitLab support task checklists in Markdown, and also project boards which add and remove labels like 'ready' and 'in progress' when cards are moved between board columns, like kanban:

- [ ] not complete

- [x] completed

Other tools support additional per-task workflow states:

- [o] open

- [x (2019-04-17)] completed on date

I worked on a large hospital internal software project where the task was to build a system for reusable checklists, editable through the web, that printed them out in duplicate or triplicate at nearby printers. People really liked having a tangible paper copy.

"The Checklist Manifesto" by Atul Gawande was published while I worked there. TIL pilots have been using checklists for process control in order to reduce error for many years.

Evernote, RememberTheMilk, Google Tasks, and Google Keep all support checklists. Asana and Gitea and TaskWarrior support task dependencies.

A person could carry around a Hipster PDA with Bullet Journal style tasks lists with checkboxes; printed from a GTD service with an API and a @media print CSS stylesheet: https://en.wikipedia.org/wiki/Hipster_PDA

I'm not aware of very many tools that support authoring reusable checklists with structured data elements and data validation.

...

There are a number of configuration management systems -- Puppet, Chef, Salt, Ansible -- that build a graph of completable and verifiable tasks and then depth-first traverse that graph (either with hash randomization, resulting in sometimes-different traversals, or with source order as an implicit ordering).
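The graph traversal those tools do can be sketched with the standard library (a toy resource graph; real SCM tools also check and converge state at each node):

```python
# Dependency-ordered task execution, like an SCM tool's resource graph.
# Each task maps to the set of tasks that must complete before it.
from graphlib import TopologicalSorter  # Python 3.9+

tasks = {
    "start_service": {"install_pkg", "write_config"},
    "write_config": {"create_user"},
    "install_pkg": set(),
    "create_user": set(),
}
order = list(TopologicalSorter(tasks).static_order())
print(order)  # any valid order lists prerequisites before dependents
```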

Resource scheduling systems like operating systems and conference room schedulers can take ~task priority into account when optimally ordering tasks given available resources; like triage.

Scheduling algorithms: https://news.ycombinator.com/item?id=15267146

TodoMVC catalogs Todo list implementations with very many MV* JS Frameworks: http://todomvc.com

[+]

For sure. Though many tools don't read .txt (or .md/.markdown) files.

GitHub and GitLab support (multiple) Issue and Pull Request templates:

Default: /.github/ISSUE_TEMPLATE.md || Configure in web interface

/.github/ISSUE_TEMPLATE/Name.md || /.gitlab/issue_templates/Name.md

Default: /.github/PULL_REQUEST_TEMPLATE.md || Configure in web interface

/.github/PULL_REQUEST_TEMPLATE/Name.md || /.gitlab/merge_request_templates/Name.md

There are template templates in awesome-github-templates [1] and checklist template templates in github-issue-templates [2].

[1] https://github.com/devspace/awesome-github-templates

[2] https://github.com/stevemao/github-issue-templates

[+]

Mattermost supports threaded replies and Markdown with checklist checkboxes

You can post GitHub/GitLab project updates to a Slack/Mattermost channel with webhooks (and search for and display GH/GL issues with /slash commands); though issue edits and checkbox state changes aren't (yet?) included in the events that channels receive.

[-]

Apply to Y Combinator

[+]
[+]

Here's the list of the 1,900 Y Combinator companies through Winter 2019 (W19) https://www.ycombinator.com/companies/

"Startup Playbook" by Sam Altman (YC founder), illustrated by Gregory Koberger, is also a good read: https://playbook.samaltman.com/

[-]

Trunk-Based Development vs. Git Flow

One major advantage of the gitflow/hubflow git workflows is that there is a standard way of merging across branches. For example, a 'hotfix' branch is merged into the stable master branch and also develop with one standard command; there's no need to re-explain and train new devs on how the branches were supposed to work here. I even copied the diagram(s) into my notes: https://westurner.github.io/tools/#hubflow
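For illustration, here is the hotfix pattern with plain git commands (branch and tag names are made up; the gitflow/hubflow tools wrap these same steps into one standard command):

```shell
# Sketch: merge a hotfix into both master and develop, hubflow-style.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "release 1.0.0"
git branch -M master                      # ensure the stable branch is 'master'
git branch develop
git checkout -q -b hotfix/1.0.1 master    # branch the fix from stable
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "fix: urgent bug"
git checkout -q master                    # merge into stable, tag the release
git -c user.name=dev -c user.email=dev@example.com merge -q --no-ff -m "Merge hotfix/1.0.1" hotfix/1.0.1
git tag 1.0.1
git checkout -q develop                   # and merge the same fix into develop
git -c user.name=dev -c user.email=dev@example.com merge -q --no-ff -m "Merge hotfix/1.0.1" hotfix/1.0.1
git branch -d hotfix/1.0.1
```

With this convention, `git log` on master shows every tagged release, and a new dev only has to learn one merge recipe.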

IMHO, `git log` on the stable master branch containing each and every tagged release is preferable to having multiple open release branches.

Requiring tests to pass before a PR gets merged is a good policy that's independent of the trunk or gitflow workflow decision.

[-]

Ask HN: Anyone else write the commit message before they start coding?

I feel like I just learned how to use Git: writing the message first thing has made me a lot more productive. I'm wondering if anyone else does this; I know test driven development is a thing, where people write tests before code, and this seems like a logical extension.

What a great idea. Are you updating the commit message with `git commit --amend` until you squash and push, or writing a novel on the side?

BDD acceptance tests can be written in a pseudo-prose syntax (and ideally, executed)

[-]

Ask HN: Datalog as the only language for web programming, logic and database

Can Datalog be used as the only language for writing server-side web applications, complex domain business logic, and database queries?

Are there any efforts being made in this direction?

To quote myself from a post the other day https://news.ycombinator.com/item?id=19407170 :

> PyDatalog does Datalog (which is ~Prolog, but similar and very capable) logic programming with SQLAlchemy (and database indexes) and apparently NoSQL support. https://sites.google.com/site/pydatalog/

> Datalog: https://en.wikipedia.org/wiki/Datalog

> ... TBH, IDK about logic programming and bad facts. Resilience to incorrect and incredible information is - I suppose - a desirable feature of any learning system that reevaluates its learnings as additional and contradictory information makes its way into the datastores.

I'm not sure that Datalog is really necessary for most CRUD operations; SQLAlchemy and the SQLAlchemy ORM are generally sufficient for standard database CRUD querying.
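For a feel of what a Datalog engine does under the hood, here is a naive bottom-up evaluation of the classic ancestor rules (a toy fixpoint loop; pyDatalog et al. use a real engine with indexes):

```python
# Naive bottom-up evaluation of:
#   ancestor(X, Y) :- parent(X, Y).
#   ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
parent = {("alice", "bob"), ("bob", "carol")}
ancestor = set(parent)
while True:
    derived = {(x, z) for (x, y) in parent for (y2, z) in ancestor if y == y2}
    if derived <= ancestor:   # fixpoint: nothing new can be derived
        break
    ancestor |= derived
print(sorted(ancestor))
# → [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]
```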

[-]

Is there a program like codeacademy but for learning sysadmin?

if not, anyone wanna build one?

A few sysadmin and devops curriculum resources; though none but Beaker and Molecule are interactive with any sort of testing AFAIU:

"System Administrator" https://en.wikipedia.org/wiki/System_administrator

"Software Configuration Management" (SCM) https://en.wikipedia.org/wiki/Software_configuration_managem...

"DevOps" https://en.wikipedia.org/wiki/DevOps

"OpsSchool Curriculum" http://www.opsschool.org

- Soft Skills 101, 201

- Labs Exercises

- Free. Contribute

awesome-sysadmin > configuration-management https://github.com/kahun/awesome-sysadmin/blob/master/README...

- This could list reusable module collections such as Puppet Forge and Ansible Galaxy;

- And module testing tools like Puppet Beaker and Ansible Molecule (that can use Vagrant or Docker to test a [set of] machines)

https://github.com/stack72/ops-books

- I'd add "Time Management for System Administrators" (2005)

https://landing.google.com/sre/books/

- There's now a "Site Reliability Workbook" to go along with the Google SRE book. Both are free online.

https://response.pagerduty.com

- The PagerDuty Incident Response Documentation is also free online.

- OpsGenie has a free plan also with incident response alerting and on-call management.

There are a number of awesome-devops lists.

Minikube and microk8s package Kubernetes into a nice bundle of distributed systems components that'll run on Lin, Mac, Win. You can convert docker-compose.yml configs to Kubernetes pods when you decide that it should've been HA with a load balancer SPOF and x.509 certs and a DRP (Disaster Recovery Plan) from the start!

[-]

Maybe You Don't Need Kubernetes

ra7 | 2019-03-22 17:18:44 | 500 | # | ^
[+]
[+]

> As Kernighan said back in the 1970's, "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"

What a great quote. Thanks

[+]

> Tools like Ansible and Puppet, as great as they are, do not guarantee your infrastructure will end up in the state you defined and you easily end up with broken services.

False dilemma. Ansible and Puppet are great tools for configuring Kubernetes and Kubernetes worker nodes, and for building container images.

Kubernetes does not solve for host OS maintenance; though there are a number of host OS projects which remove most of what they consider to be unnecessary services, there's still a need to upgrade Kubernetes nodes and move pods out of the way first (which can be done with e.g. Puppet or Ansible).

As well, it may not be appropriate for monitoring to depend upon kubernetes; there again you have nodes to manage with an SCM tool.

[-]

Quantum Machine Appears to Defy Universe’s Push for Disorder

[+]

"Scar (physics)": https://en.wikipedia.org/wiki/Scar_(physics)

> Scars are unexpected in the sense that stationary classical distributions at the same energy are completely uniform in space with no special concentrations along periodic orbits, and quantum chaos theory of energy spectra gave no hint of their existence

[-]

Pytype checks and infers types for your Python code

How does pytype compare with the PyAnnotate [1] and MonkeyType [2] dynamic / runtime PEP-484 type annotation type inference tools?

[1] https://github.com/dropbox/pyannotate

[2] https://github.com/Instagram/MonkeyType
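For context, here is a hand-written example of the PEP 484 annotations such tools aim to produce (pytype infers them statically; MonkeyType/PyAnnotate record types observed at runtime):

```python
from typing import List

# An unannotated `def mean(xs): ...` could be annotated like this by a type
# inferencer (statically) or a runtime tracer (from observed call arguments).
def mean(xs: List[float]) -> float:
    return sum(xs) / len(xs)

assert mean([1.0, 2.0, 3.0]) == 2.0
```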

[-]

How I'm able to take notes in mathematics lectures using LaTeX and Vim

[+]
[+]

> The mechanical task of taking notes is one of the most important parts of actually absorbing the material. It is not an either-or. Hearing/seeing the information, processing it in a way that makes sense to you individually, and then mechanically writing it down in a legible manner is one of the main methods that your brain learns. It's one of the primary reasons that taking notes is important in the first place. This is referred to as the "encoding hypothesis" [1].

There's almost certainly an advantage to learning to think about math using a publishable symbol set like LaTeX.

We learn by reinforcement; with feedback loops that may take until weeks later in a typical university course.

> There are actually even studies [2] that show that tools that assist in more efficient note taking, such as taking notes via typing rather than by hand, are actually detrimental to absorbing information, as it makes it easier for you to effectively pass the information directly from your ears to your computer without actually doing the processing that is required when writing notes by hand.

Handwriting notes is impractical for some people due to e.g. injury and illegibility.

The linked study regarding retention of handwritten versus typed notes has been debunked; see the references cited elsewhere in the comments on this post. There have been a few studies with insufficient controls (lack of randomization, for one) which have been widely repeated by educators eager for attention.

Doodling has been shown to increase information retention. Maybe doodling as a control really would be appropriate.

Banning laptops from lectures is not respectful of students with injury and illegible handwriting. Asking people to put their phones on silent (so they can still make and take emergency calls) and refrain from distracting other students with irrelevant content on their computers is reasonable and considerate.

(What a cool approach to math note-taking. I feel a bit inferior because I haven't committed to learning that valuable, helpful skill and so that's stupid and you're just wasting your time because that's not even necessary when all you need to do is retain the information you've paid for for the next few months at most. Of course, once you get on the job, you'll never always be using that tool and e.g. latex2sympy to actually apply that theory to solving a problem that people are willing to pay for. So, thanks for the tips and kudos, idiot)

[-]

LHCb discovers matter-antimatter asymmetry in charm quarks

So, does this disprove all of supersymmetry? https://en.wikipedia.org/wiki/Supersymmetry

[+]

Ah, thanks.

"CPT Symmetry" https://en.wikipedia.org/wiki/CPT_symmetry

"CP Violations" https://en.wikipedia.org/wiki/CP_violation

"Charm quark" https://en.wikipedia.org/wiki/Charm_quark :

> The antiparticle of the charm quark is the charm antiquark (sometimes called anticharm quark or simply anticharm), which differs from it only in that some of its properties have equal magnitude but opposite sign.

[-]

React Router v5

[+]
[+]
[+]
[+]

Accidentally downvoted on mobile (and upvoted two others). Thanks for this.

"Scroll Restoration" https://reacttraining.com/react-router/web/guides/scroll-res...

[-]

Experimental rejection of observer-independence in the quantum world

Objective truth!? A question for epistemologists to decide.

How could they record their high-entropy (?) solipsistic observations in an immutable datastore in such a way as to have probably zero knowledge of the other party's observations?

Anyways, that's why I only read the title and the abstract.

Wigner's friend experiment: https://en.wikipedia.org/wiki/Wigner%27s_friend

[-]

Show HN: A simple Prolog Interpreter written in a few lines of Python 3

Cool tests! PyDatalog does Datalog (which is ~Prolog, but similar and very capable) logic programming with SQLAlchemy (and database indexes) and apparently NoSQL support. https://sites.google.com/site/pydatalog/

Datalog: https://en.wikipedia.org/wiki/Datalog

... TBH, IDK about logic programming and bad facts. Resilience to incorrect and incredible information is - I suppose - a desirable feature of any learning system that reevaluates its learnings as additional and contradictory information makes its way into the datastores.

[+]
[-]

How to earn your macroeconomics and finance white belt as a software developer

Thanks for the wealth of resources in this post. Here are a few more:

"Python for Finance: Analyze Big Financial Data" (2014, 2018) https://g.co/kgs/qkY8J6 ... https://pyalgo.tpq.io also includes the "Finance with Python" course and this book as a PDF and Jupyter notebooks.

Quantopian put out a call for the best Value Investing algos (implemented in quantopian/zipline) awhile back. This post links to those and other value investing resources: https://westurner.github.io/hnlog/#comment-19181453 (Ctrl-F "econo")

"Lectures in Quantitative Economics as Python and Julia Notebooks" https://news.ycombinator.com/item?id=19083479 links to these excellent lectures and a number of tools for working with actual data from FRED, ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank, Quandl.

One thing that many finance majors, courses, and resources often fail to identify is the role that startup and small businesses play in economic growth and actual value creation: jobs, GDP, return on direct capital investment. Most do not succeed, but it is possible to do better than index funds and have far more impact in terms of sustainable investment than as an owner of a nearly-sure-bet index fund that owns some shares and takes a hands-off approach to business management, research, product development, and operations.

Is it possible to possess a comprehensive understanding of finance and economics but still not have personal finance down? Personal finance: r/personalfinance/wiki, "Consumer science (a.k.a. home economics) as a college major" https://news.ycombinator.com/item?id=17894632

[-]

Ask HN: Relationship between set theory and category theory

I have an idea about the relationship between set theory and category theory and I would like some feedback. I would like others to see it too, and I don't know how to do it. I think it's at least interesting to look at as a slightly crazy collage, but I was a bit more excited than normal when the idea hit, so I just had to dump it all at once in this image: https://twitter.com/FamilialRhino/status/1101777965724168193 (You will have to zoom the picture in order to be able to read the scribbles.)

It has to do with resonance in the energy flowing in emergent networks. Can't quite put my finger on it, so I'll be here to answer any questions.

Thanks for reading.

"Categorical set theory" > "References" https://en.wikipedia.org/wiki/Categorical_set_theory#Referen...

From "Homotopy category" > "Concrete categories" https://en.wikipedia.org/wiki/Homotopy_category#Concrete_cat... :

> While the objects of a homotopy category are sets (with additional structure), the morphisms are not actual functions between them, but rather classes of functions (in the naive homotopy category) or "zigzags" of functions (in the homotopy category). Indeed, Freyd showed that neither the naive homotopy category of pointed spaces nor the homotopy category of pointed spaces is a concrete category. That is, there is no faithful functor from these categories to the category of sets.

[+]
[-]

The most popular docker images each contain at least 30 vulnerabilities

[+]
[+]
[+]
[+]
[+]

I don't think this is a tooling problem at all.

"The tooling makes it too easy to do it wrong." Compared to shell scripts with package manager invocations? Nobody configures a system with just packages: there are always scripts to call, chroots to create, users and groups to create, passwords to set, firewall policies to update, etc.

There are a bunch of ways to create LXC containers: shell scripts, Docker, ansible. Shell scripts preceded Docker: you can write a function to stop, create an intermediate tarball, and then proceed (so that you don't have to run e.g. debootstrap without a mirror every time you manually test your system build script; so that you can cache build steps that completed successfully).

With Docker images, the correct thing to do is to extend FROM the image you want to use or build the whole thing yourself, and then tag and store your image in a container repository. You also should not rely upon months-old liveCD images.

"You should just build containers on busybox." So, no package management? A whole ensemble of custom builds to manually maintain (with no AppArmor or SELinux labels)? Maintainers may prefer for distros to field bug reports for their own common build configurations and known-good package sets. Please don't run as root in a container ("because it's only a container that'll get restarted someday"). Busybox is not a sufficient OS distribution.

It's not the tools, it's how people are choosing to use them. They can, could, and should try and use idempotent package management tasks within their container build scripts; but they don't and that's not Bash/Ash/POSIX's fault either.

> With Docker images, the correct thing to do is to extend FROM the image you want to use or build the whole thing yourself, and then tag and store your image in a container repository. You also should not rely upon months-old liveCD images.

This should rebuild everything. There should be an e.g. `apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*` in there somewhere (because base images are usually not totally current (and neither are install ISOs)).

`docker build --no-cache --pull`

You should check that each Dockerfile extends FROM `tag:latest` or the latest version of the tag that you support. It's not magical; you do have to work at it.

Also, IMHO, Docker SHOULD NOT create another Linux distribution.

[-]

Tinycoin: A small, horrible cryptocurrency in Python for educational purposes

The 'dumbcoin' jupyter notebook is also a good reference: "Dumbcoin - An educational python implementation of a bitcoin-like blockchain" https://nbviewer.jupyter.org/github/julienr/ipynb_playground...

[-]

When does the concept of equilibrium work in economics?

"Modeling stock return distributions with a quantum harmonic oscillator" (2018) https://iopscience.iop.org/article/10.1209/0295-5075/120/380...

> We propose a quantum harmonic oscillator as a model for the market force which draws a stock return from short-run fluctuations to the long-run equilibrium. The stochastic equation governing our model is transformed into a Schrödinger equation, the solution of which features "quantized" eigenfunctions. Consequently, stock returns follow a mixed χ distribution, which describes Gaussian and non-Gaussian features. Analyzing the Financial Times Stock Exchange (FTSE) All Share Index, we demonstrate that our model outperforms traditional stochastic process models, e.g., the geometric Brownian motion and the Heston model, with smaller fitting errors and better goodness-of-fit statistics. In addition, making use of analogy, we provide an economic rationale of the physics concepts such as the eigenstate, eigenenergy, and angular frequency, which sheds light on the relationship between finance and econophysics literature.

"Quantum harmonic oscillator" https://en.wikipedia.org/wiki/Quantum_harmonic_oscillator

The QuantEcon lectures have a few different multiple agent models:

"Rational Expectations Equilibrium" https://lectures.quantecon.org/py/rational_expectations.html

"Markov Perfect Equilibrium" https://lectures.quantecon.org/py/markov_perf.html

"Robust Markov Perfect Equilibrium" https://lectures.quantecon.org/py/rob_markov_perf.html

"Competitive Equilibria of Chang Model" https://lectures.quantecon.org/py/chang_ramsey.html

... "Lectures in Quantitative Economics as Python and Julia Notebooks" https://news.ycombinator.com/item?id=19083479 (data sources (pandas-datareader, pandaSDMX), tools, latex2sympy)

"Econophysics" https://en.wikipedia.org/wiki/Econophysics

> Indeed, as shown by Bruna Ingrao and Giorgio Israel, general equilibrium theory in economics is based on the physical concept of mechanical equilibrium.

[-]

Simdjson – Parsing Gigabytes of JSON per Second

> Requirements: […] A processor with AVX2 (i.e., Intel processors starting with the Haswell microarchitecture released 2013, and processors from AMD starting with Ryzen)

[+]
[+]
[+]
[+]
[+]
[+]
[-]

A faster, more efficient cryptocurrency

[+]

Are there reasons that e.g. Bitcoin and Ethereum and Stellar could not implement some of these more performant approaches that Algorand [1] and Vault [2] have developed, published, and implemented? Which would require a hard fork?

[1] https://www.algorand.com/

[2] https://dspace.mit.edu/handle/1721.1/117821

[+]

And what of decentralized premined chains (with no PoW, no PoS, and far less energy use) that release coins with escrow smart contracts over time such as Ripple and Stellar (and close a new ledger every few seconds)?

> Algorand has a very fast consensus mechanism and can add blocks as quickly as the network can deliver them. We become a victim of our success. The blockchain will grow very rapidly. A terabyte a month is possible. The storage issue associated with our performance can quickly become an issue. The Vault paper is focused on solving this and other storage scaling problems.

What prevents a person from using a chain like IPFS?

Ethereum Casper PoS has been under review for quite some time.

Why isn't all Bitcoin on Lightning Network?

Bitcoin could make bootstrapping faster by choosing a considered-good block hash and set of balances; but AFAIU, re-verifying transactions, as Bitcoin and its derivatives do, prevents hash-collision attacks, which are currently considered infeasible for SHA-256 (especially given a small block size).
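
For reference, Bitcoin's block and transaction hashes are double SHA-256. A minimal sketch (the header bytes below are a stand-in, not the real 80-byte serialization):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256, as applied to block headers and transactions."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# A hypothetical 'header'; real Bitcoin headers are 80 serialized bytes.
digest = double_sha256(b"previous-block-hash|merkle-root|timestamp|nonce")
assert len(digest) == 32  # 256 bits; finding a collision is considered infeasible
```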

There was an analysis somewhere that calculated the cloud server instance costs of mounting a ~51% attack (which applies to PoW chains) against various blockchains.

Bitcoin is not profitable to mine in places without heavily subsidized dirty/clean energy anymore: energy and Bitcoin commodity costs and prices have intersected. They'll need any of: inexpensive clean energy, more efficient chips, higher speculative value.

Energy arbitrage (grid-scale energy storage) may be more profitable now. We need energy storage in order to reach 100% renewable energy (regardless of floundering policy support).

[+]

People argue this all day. There's a lot of FUD.

Ripple only runs ~7% of validator nodes; which is far less centralized control than major Bitcoin mining pools and businesses (who do the deciding in regards to the many Bitcoin hard forks); that's one form of decentralization.

Ripple clients can use their own UNL or use the Ripple-approved UNL.

Ripple is traded on a number of exchanges (though fewer than Bitcoin for certain); that's another form of decentralization.

As an open standard, ILP will further reduce vendor lock in (and increase interoperability between) networks that choose to implement it.

There are forks of Ripple (e.g. Stellar) just like there are forks of Bitcoin and Ethereum.

From https://ripple.com/insights/the-inherently-decentralized-nat... :

> In contrast, the XRP Ledger requires 80 percent of validators on the entire network, over a two-week period, to continuously support a change before it is applied. Of the approximately 150 validators today, Ripple runs only 10. Unlike Bitcoin and Ethereum — where one miner could have 51 percent of the hashing power — each Ripple validator only has one vote in support of an exchange or ordering a transaction.

How does your definition of 'decentralized' differ?

[+]
[-]

Git-signatures – Multiple PGP signatures for your commits

[+]
[+]

> I think it is probably in the class of problems where there are no great foolproof solutions. However, I can imagine that techniques like certificate transparency (all signed x509 certificates pushed to a shared log) would be quite useful.

Securing DNS: "https://news.ycombinator.com/item?id=19181362"

> Certs on the Blockchain: "Can we merge Certificate Transparency with blockchain?" https://news.ycombinator.com/item?id=18961724

> Namecoin (decentralized blockchain DNS): https://en.wikipedia.org/wiki/Namecoin

[+]

My mistake. How ironic. Everything depends upon the red wheelbarrow. Here's that link without the trailing ": https://news.ycombinator.com/item?id=19181362

> My main problem with blockchain is the excessive energy consumption of PoW. I know there are PoS efforts, but they seem problematical.

One report said that 78% of Bitcoin energy usage is from renewable sources (many of which would otherwise be curtailed and otherwise unfunded due to flat-to-falling demand for electricity). But PoW really is expensive, and hopefully the market will choose more energy-efficient options from among existing and future blockchain solutions while keeping equal or better security assurances.

>> Proof of Work (Bitcoin, ...), Proof of Stake (Ethereum Casper), Proof of Space, Proof of Research (GridCoin, CureCoin)

The spec should be: DDoS resilient (without a SPOF), no one entity with control over API and/or database credentials, database backups, and the clock; and immutable.

Immutability really cannot be ensured with hashed records that incorporate the previous record's hash as a salt in a blocking centralized database, because someone ultimately has root, the clock, all the backups, and code vulnerable to e.g. [No]SQL injection; though distributed 'replication' and detection of record modification could be implemented. A `git push -f` may be detected if it's on an already-replicated branch; but git depends upon local timestamps. google/trillian does Merkle trees in a centralized database (for Certificate Transparency).
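
A minimal sketch of the "hashed records that incorporate the previous record's hash" scheme, and of why it only detects modification rather than preventing it (someone with root and the backups can simply recompute the whole chain):

```python
import hashlib
import json

def chain(records):
    """Hash each record together with the previous record's hash."""
    prev, out = "0" * 64, []
    for rec in records:
        digest = hashlib.sha256((prev + json.dumps(rec, sort_keys=True)).encode()).hexdigest()
        out.append({"record": rec, "prev": prev, "hash": digest})
        prev = digest
    return out

def verify(entries):
    """Recompute every link; any modified record breaks all later hashes."""
    prev = "0" * 64
    for e in entries:
        digest = hashlib.sha256((prev + json.dumps(e["record"], sort_keys=True)).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != digest:
            return False
        prev = digest
    return True

log = chain([{"op": "add", "n": 1}, {"op": "add", "n": 2}])
assert verify(log)
log[0]["record"]["n"] = 99   # tamper with an early record...
assert not verify(log)       # ...and the chain no longer verifies
```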

In quickly reading the git-signatures shell script sources, I wasn't certain whether the git-notes branch with the .gitsigners that are fetched from all n keyservers (with DNS) is also signed?

I also like the "Table 1: Security comparison of Log Based Approaches to Certificate Management" in the CertLedger paper. Others are far more qualified to compare implementations.

[+]

> I'd love if it could be rooted in a Yubikey.

FIDO2 and Yubico helped develop the new W3C WebAuthn standard: https://en.wikipedia.org/wiki/WebAuthn

But WebAuthn does not solve for WoT or PKI or certificate pinning.

> Decoupling the "signing" and "verifying" parts seem like a good idea. As random Person signs something, how someone else figures out how to go trust that signature is a separate problem.

Someone can probably help with terminology here. There's identification (proving that a person has the key AND that it's their key (biometrics, challenge-response)), signing (using a key to create a cryptographic signature – for the actual data or a reasonably secure cryptographic hash of said data – that could only have been created with the given key), signature verification (checking that the signature was created by the claimed key for the given data), and then there's trusting that the given key is authorized for a specific purpose (Web of Trust (key-signing parties), PKI, ACME, exchange of symmetric keys over a different channel such as QKD) by e.g. signing a structured document that links cryptographic keys with keys for specific authorized functions and trusting the key(s) used to sign said authorizing document.
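
To make the sign/verify split concrete, here is a minimal stdlib sketch using HMAC. Note that HMAC is symmetric (the same key signs and verifies), unlike the asymmetric PGP signatures discussed here; it illustrates only the separation between the signing and verification steps:

```python
import hashlib
import hmac

def sign(key: bytes, data: bytes) -> bytes:
    """Create a signature (MAC) that could only have been produced with `key`."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify_sig(key: bytes, data: bytes, signature: bytes) -> bool:
    """Verification is a separate step: recompute and compare in constant time."""
    return hmac.compare_digest(sign(key, data), signature)

key, msg = b"shared-secret", b"release-v1.0.tar.gz"
sig = sign(key, msg)
assert verify_sig(key, msg, sig)
assert not verify_sig(key, b"tampered", sig)
```

Whether to *trust* the signing key for a given purpose (WoT, PKI) remains, as above, a separate problem that no verification code solves.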

Private (e.g. Zero Knowledge) blockchains can be used for key exchange and key rotation. Public blockchains can be used for sharing (high-entropy) key components; also with an optional exchange of money to increase the cost of key compromise attempts.

There's also WKD: "Web Key Directory"; which hosts GPG keys over HTTPS from a .well-known URL for a given user@domain identifier: https://wiki.gnupg.org/WKD

Compared to existing PGP/GPG keyservers, WKD does rely upon HTTPS.

TUF is based on Thandy. TUF: "The Update Framework" does not presume channel security (is designed to withstand channel compromise) https://en.wikipedia.org/wiki/The_Update_Framework_(TUF)

The TUF spec doesn't mention PGP/GPG: https://github.com/theupdateframework/specification/blob/mas...

There's a derivative of TUF for automotive applications called Uptane: https://uptane.github.io

The Bitcoin article on multisignature; 1-of-2, 2-of-2, 2-of-3, 3-of-5, etc.: https://en.bitcoin.it/wiki/Multisignature
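
The m-of-n threshold rule can be sketched as a counting check (a simplification: real multisignature scripts verify each signature cryptographically before counting it, and the key names here are hypothetical):

```python
def multisig_valid(signatures, authorized_keys, m):
    """m-of-n check: at least m distinct authorized keys must have signed.

    `signatures` maps a key id to whether that key produced a valid
    signature; real multisig verifies each signature before counting.
    """
    valid = {k for k, ok in signatures.items() if ok and k in authorized_keys}
    return len(valid) >= m

keys = {"alice", "bob", "carol"}
assert multisig_valid({"alice": True, "bob": True}, keys, m=2)          # 2-of-3 met
assert not multisig_valid({"alice": True, "mallory": True}, keys, m=2)  # unauthorized key
```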

[-]

Compounding Knowledge

[+]
[+]
[+]
[+]

BTW, AQR funded the initial development of pandas; which now powers tools like alphalens (predictive factor analysis) and pyfolio.

There's your 'compounding knowledge'.

(Days later)

"7 Best Community-Built Value Investing Algorithms Using Fundamentals" https://blog.quantopian.com/fundamentals-contest-winners/

(The Zipline backtesting library also builds upon Pandas)

How can we factor ESG/sustainability reporting into these fundamentals-driven algorithms in order to save the world?

[+]
[+]

"The Superinvestors of Graham and Doddsville" (1984) https://scholar.google.com/scholar?cluster=17265410477248371...

From https://en.wikipedia.org/wiki/The_Superinvestors_of_Graham-a... :

> The speech and article challenged the idea that equity markets are efficient through a study of nine successful investment funds generating long-term returns above the market index.

This book probably doesn't mention that Buffett has given away over 71% of his wealth to charity since Y2K. Or that it's really cold and windy and snowy in Omaha; which makes for lots of reading time.

"Warren Buffett and the Interpretation of Financial Statements: The Search for the Company with a Durable Competitive Advantage" (2008) [1], "Buffetology" (1999) [2], and "The Intelligent Investor" (1949, 2009) [3] are more investment-strategy-focused texts.

[1] https://smile.amazon.com/Warren-Buffett-Interpretation-Finan...

[2] https://smile.amazon.com/Buffettology-Previously-Unexplained...

[3] https://smile.amazon.com/Intelligent-Investor-Definitive-Inv...

Value Investing: https://en.wikipedia.org/wiki/Value_investing https://www.investopedia.com/terms/v/valueinvesting.asp

> This is why it’s commonly telling you what happened, not why it happened or under what conditions it might happen again.

[-]

Why CISA Issued Our First Emergency Directive

There are a number of efforts to secure DNS (and SSL/TLS which generally depends upon DNS; and upon which DNS-over-HTTPS depends) and the identity proof systems which are used for record-change authentication and authorization.

Domain registrars can and SHOULD implement multi-factor authentication. https://en.wikipedia.org/wiki/Multi-factor_authentication

Are there domain registrars that support FIDO/U2F or the new W3C WebAuthn spec? https://en.wikipedia.org/wiki/WebAuthn

Credentials and blockchains (and biometrics): https://gist.github.com/westurner/4345987bb29fca700f52163c33...

DNSSEC: https://en.wikipedia.org/wiki/Domain_Name_System_Security_Ex...

ACME / LetsEncrypt certs expire after 3 months (*) and require various proofs of domain ownership: https://en.wikipedia.org/wiki/Automated_Certificate_Manageme...

Certificate Transparency: https://en.wikipedia.org/wiki/Certificate_Transparency

Certs on the Blockchain: "Can we merge Certificate Transparency with blockchain?" https://news.ycombinator.com/item?id=18961724

Namecoin (decentralized blockchain DNS): https://en.wikipedia.org/wiki/Namecoin

DNSCrypt: https://en.wikipedia.org/wiki/DNSCrypt

DNS over HTTPS: https://en.wikipedia.org/wiki/DNS_over_HTTPS

DNS over TLS: https://en.wikipedia.org/wiki/DNS_over_TLS

DNS: https://en.wikipedia.org/wiki/Domain_Name_System

[-]

Chrome will Soon Let You Share Links to a Specific Word or Sentence on a Page

[+]

"Integration with W3C Web Annotations" https://github.com/bokand/ScrollToTextFragment/issues/4

> It would be great to be able to comment on the linked resource text fragment. W3C Web Annotations [implementations] don't recognize the targetText parameter, so AFAIU comments are then added to the document#fragment and not the specified text fragment. [...]

> Is there a simplified mapping of W3C Web Annotations to URI fragment parameters?

[-]

Guidelines for keeping a laboratory notebook

[+]

> Computation related fields lend themselves well to purely electronic notebooks, no surprise. Today, a lot of my work fits perfectly in a Jupyter notebook.

Some notes and ideas regarding Jupyter notebooks as lab notebooks from "Keeping a Lab Notebook [pdf]": https://news.ycombinator.com/item?id=15710815

[-]

Superalgos and the Trading Singularity

Though others didn't, you might find this interesting: "Ask HN: Why would anyone share trading algorithms and compare by performance?" https://news.ycombinator.com/item?id=15802785 ( https://westurner.github.io/hnlog/#story-15802785 )

[+]

I think part of the value of sharing knowledge and algorithmic implementations comes from getting feedback from other experts; like peer review and open science and teaching.

Case in point: the first algorithm on this list [1] of community contributed algorithms that were migrated to their new platform is "minimum variance w/ constraint" [2]. Said algorithm showed returns of over 200% as compared with 77% returns from the SPY S&P 500 ETF over the same period, ceteris paribus. In the 69 replies, there are modifications by community members and the original author that exceed 300%.

Working together on open algorithms has positive returns that may exceed advantages of closed algorithmic development without peer review.

[1] https://www.quantopian.com/posts/community-algorithms-migrat...

[2] https://www.quantopian.com/posts/56b6021b3f3b36b519000924
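
As a sketch of the "minimum variance" idea behind that algorithm, here is the two-asset closed form (without the algorithm's actual constraints; the variances and covariance below are hypothetical):

```python
def min_variance_weights(var1, var2, cov12):
    """Two-asset global minimum-variance portfolio (unconstrained).

    w1 = (var2 - cov12) / (var1 + var2 - 2*cov12); weights sum to 1.
    The linked Quantopian algorithm layers constraints on top of this idea.
    """
    w1 = (var2 - cov12) / (var1 + var2 - 2 * cov12)
    return w1, 1.0 - w1

# A hypothetical pair: a volatile asset and a calmer, weakly correlated one.
w1, w2 = min_variance_weights(var1=0.04, var2=0.01, cov12=0.002)
# Portfolio variance at these weights is lower than either asset held alone.
pvar = w1**2 * 0.04 + w2**2 * 0.01 + 2 * w1 * w2 * 0.002
assert pvar <= min(0.04, 0.01)
```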

[+]

> How well does it do in production though and what happens when multiple algos execute the same trades?

Price inflation.

> Does it cause the rest of the algos to adapt and change results?

Trading index ETFs? IDK

> It makes sense to back-test together and work on it, but if it's proven to work, someone will create something to monitor volume on those trades and work against it.

Why does it need to do lots of trades? Is it possible for anyone other than e.g. SEC to review trades by buyer or seller?

> I'd be curious to see the same algo do 300% in production, and if so, then my bias would be uncalled for.

pyfolio does tear sheets with Zipline algos: pyfolio/examples/zipline_algo_example.ipynb https://nbviewer.jupyter.org/github/quantopian/pyfolio/blob/...

alphalens does performance analysis of predictive factors: alphalens/examples/pyfolio_integration.ipynb https://nbviewer.jupyter.org/github/quantopian/alphalens/blo...

awesome-quant lists a bunch of other tools for algos and superalgos: https://github.com/wilsonfreitas/awesome-quant

What's a good platform for paper trading (with e.g. zipline or moonshot algorithms)?

[+]
[-]

Crunching 200 years of stock, bond, currency and commodity data

[+]
[+]
[+]

I was interested, so I did some research here.

Rational Choice Theory https://en.wikipedia.org/wiki/Rational_choice_theory

Rational Behavior https://www.investopedia.com/terms/r/rational-behavior.asp

> Most mainstream academic economics theories are based on rational choice theory.

> While most conventional economic theories assume rational behavior on the part of consumers and investors, behavioral finance is a field of study that substitutes the idea of “normal” people for perfectly rational ones. It allows for issues of psychology and emotion to enter the equation, understanding that these factors alter the actions of investors, and can lead to decisions that may not appear to be entirely rational or logical in nature. This can include making decisions based primarily on emotion, such as investing in a company for which the investor has positive feelings, even if financial models suggest the investment is not wise.

Behavioral finance https://www.investopedia.com/terms/b/behavioralfinance.asp

Bounded rationality > Relationship to behavioral economics https://en.wikipedia.org/wiki/Bounded_rationality

Perfectly rational decisions can be and are made without perfect information; bounded by the information available at the time. If we all had perfect information, there would be no entropy and no advantage; just lag and delay between credible reports and order entry.

Information asymmetry https://en.wikipedia.org/wiki/Information_asymmetry

Heed these words wisely: What foolish games! Always breaking my heart.

https://deepmind.com/blog/game-theory-insights-asymmetric-mu...

> Asymmetric games also naturally model certain real-world scenarios such as automated auctions where buyers and sellers operate with different motivations. Our results give us new insights into these situations and reveal a surprisingly simple way to analyse them. While our interest is in how this theory applies to the interaction of multiple AI systems, we believe the results could also be of use in economics, evolutionary biology and empirical game theory among others.

https://en.wikipedia.org/wiki/Pareto_efficiency

> A Pareto improvement is a change to a different allocation that makes at least one individual or preference criterion better off without making any other individual or preference criterion worse off, given a certain initial allocation of goods among a set of individuals. An allocation is defined as "Pareto efficient" or "Pareto optimal" when no further Pareto improvements can be made, in which case we are assumed to have reached Pareto optimality.
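
The quoted definition can be expressed as a direct check over per-individual utilities (a toy sketch; the allocations are hypothetical):

```python
def pareto_improvement(before, after):
    """True if `after` makes at least one individual better off and no
    individual worse off (utilities keyed by individual)."""
    no_one_worse = all(after[i] >= before[i] for i in before)
    someone_better = any(after[i] > before[i] for i in before)
    return no_one_worse and someone_better

alloc = {"a": 3, "b": 5}
assert pareto_improvement(alloc, {"a": 4, "b": 5})      # a gains, b unchanged
assert not pareto_improvement(alloc, {"a": 6, "b": 4})  # b is worse off
```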

Which, I think, brings me to equitable availability of maximum superalgo efficiency and limits of real value creation in capital and commodities markets; which'll have to be a topic for a different day.

[-]

Show HN: React-Schemaorg: Strongly-Typed Schema.org JSON-LD for React

[+]

https://dev.to/eyassh/react-schemaorg-strongly-typed-schemao...

Is there a good way to generate JSONschema and thus forms from schema.org RDFS classes and (nested, repeatable) properties?

[+]

There are a number of tools for generating forms and requisite client and serverside data validations from JSONschema; but I'm not aware of any for RDFS (and thus the schema.org schema [1]). A different use case, for certain.

[1] https://schema.org/docs/developers.html#defs

[+]
[-]

Consumer Protection Bureau Aims to Roll Back Rules for Payday Lending

From the article:

> The way payday loans work is that payday lenders typically offer small loans to borrowers who promise to pay the loans back by their next paycheck. Interest on the loans can have an annual percentage rate of 390 percent or more, according to a 2013 report by the CFPB. Another bureau report from the following year found that most payday loans — as many as 80 percent — are rolled over into another loan within two weeks. Borrowers often take out eight or more loans a year.

390%
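
The 390% figure follows from simple arithmetic, assuming the commonly cited terms of a $15 fee per $100 borrowed over a two-week term:

```python
# A common payday loan structure: a $15 fee per $100 borrowed, due in two weeks.
# Annualizing over 26 two-week periods yields the ~390% APR cited by the CFPB.
fee_per_100 = 15.0
periods_per_year = 26            # two-week terms per year
apr = (fee_per_100 / 100) * periods_per_year * 100
assert apr == 390.0
```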

From https://www.npr.org/2019/02/06/691944789/consumer-protection... :

> TARP recovered funds totalling $441.7 billion from $426.4 billion invested, earning a $15.3 billion profit or an annualized rate of return of 0.6% and perhaps a loss when adjusted for inflation.[2][3]

0.6%

[+]
[+]
[+]

Lectures in Quantitative Economics as Python and Julia Notebooks

[+]
[+]
[+]
[+]

You can build something like this with Jupyter today.

> Traitlets is a framework that lets Python classes have attributes with type checking, dynamically calculated default values, and ‘on change’ callbacks. https://traitlets.readthedocs.io/en/stable/

> Traitlet events. Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the observe method of the widget can be used to register a callback https://ipywidgets.readthedocs.io/en/stable/examples/Widget%...

You can definitely build interactive notebooks with Jupyter Notebook and JupyterLab (and ipywidgets or Altair or HoloViews and Bokeh or Plotly for interactive data visualization).
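
The quoted traitlets "on change" callback pattern can be sketched in plain Python (a simplified illustration of the pattern, not the real traitlets API):

```python
class Observable:
    """A minimal sketch of traitlets-style 'on change' callbacks
    (illustrative only; the real traitlets API differs)."""

    def __init__(self, value):
        self._value = value
        self._callbacks = []

    def observe(self, callback):
        """Register a callback to run whenever the value changes."""
        self._callbacks.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        old, self._value = self._value, new
        for cb in self._callbacks:
            cb({"old": old, "new": new})  # traitlets passes a similar change dict

slider = Observable(0)
seen = []
slider.observe(seen.append)
slider.value = 42
assert seen == [{"old": 0, "new": 42}]
```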

> Qgrid is a Jupyter notebook widget which uses SlickGrid to render pandas DataFrames within a Jupyter notebook. This allows you to explore your DataFrames with intuitive scrolling, sorting, and filtering controls, as well as edit your DataFrames by double clicking cells. https://github.com/quantopian/qgrid

Qgrid's API includes event handler registration: https://qgrid.readthedocs.io/en/latest/

> neuron is a robust application that seamlessly combines the power of Visual Studio Code with the interactivity of Jupyter Notebook. https://marketplace.visualstudio.com/items?itemName=neuron.n...

"Excel team considering Python as scripting language: asking for feedback" (2017) https://news.ycombinator.com/item?id=15927132

OpenOffice Calc ships with Python 2.7 support: https://wiki.openoffice.org/wiki/Python

Procedural scripts written in a general purpose language with named variables (with no UI input except for chart design and persisted parameter changes) are reproducible.

What's a good way to review all of the formulas and VBA and/or Python and data ETL in a spreadsheet?

Is there a way to record a reproducible data transformation script from a sequence of GUI interactions in e.g. OpenRefine or similar?

OpenRefine/OpenRefine/wiki/Jupyter

"Within the Python context, a Python OpenRefine client allows a user to script interactions within a Jupyter notebook against an OpenRefine application instance, essentially as a headless service (although workflows are possible where both notebook-scripted and live interactions take place. https://github.com/OpenRefine/OpenRefine/wiki/Jupyter

Are there data wrangling workflows that are supported by OpenRefine but not Pandas, Dask, or Vaex?

[+]

There are undergraduate and graduate courses in each language:

Python version: https://lectures.quantecon.org/py/

Julia version: https://lectures.quantecon.org/jl/

[+]

pandas-datareader can pull data from e.g. FRED, Eurostat, Quandl, World Bank: https://pandas-datareader.readthedocs.io/en/latest/remote_da...

pandaSDMX can pull SDMX data from e.g. ECB, Eurostat, ILO, IMF, OECD, UNSD, UNESCO, World Bank; with requests-cache for caching data requests: https://pandasdmx.readthedocs.io/en/latest/#supported-data-p...

The scikit-learn estimator interface includes a .score() method. "3.3. Model evaluation: quantifying the quality of predictions" https://scikit-learn.org/stable/modules/model_evaluation.htm...
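
For regressors, `.score()` returns the coefficient of determination R². A stdlib sketch of the formula scikit-learn documents:

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination R^2, what scikit-learn regressors
    return from .score(): 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

assert r2_score([1, 2, 3], [1, 2, 3]) == 1.0  # perfect predictions
assert r2_score([1, 2, 3], [2, 2, 2]) == 0.0  # no better than predicting the mean
```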

statsmodels also has various functions for statistically testing models: https://www.statsmodels.org/stable/

"latex2sympy parses LaTeX math expressions and converts it into the equivalent SymPy form" and is now merged into SymPy master and callable with sympy.parsing.latex.parse_latex(). It requires antlr-python-runtime to be installed. https://github.com/augustt198/latex2sympy https://github.com/sympy/sympy/pull/13706

IDK what Julia has for economic data retrieval and model scoring / cost functions?

[-]

If Software Is Funded from a Public Source, Its Code Should Be Open Source

From the US Digital Services Playbook [1]:

> PLAY 13

> Default to open

> When we collaborate in the open and publish our data publicly, we can improve Government together. By building services more openly and publishing open data, we simplify the public’s access to government services and information, allow the public to contribute easily, and enable reuse by entrepreneurs, nonprofits, other agencies, and the public.

> Checklist

> - Offer users a mechanism to report bugs and issues, and be responsive to these reports

> [...]

> - Ensure that we maintain contractual rights to all custom software developed by third parties in a manner that is publishable and reusable at no cost

> [...]

> - When appropriate, publish source code of projects or components online

> [...]

> Key Questions

> [...]

> - If the codebase has not been released under an open source license, explain why.

> - What components are made available to the public as open source?

> [...]

[1] https://playbook.cio.gov/#play13

Apache Arrow 0.12.0

> Apache Arrow is a cross-language development platform for in-memory data. It specifies a standardized language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware. It also provides computational libraries and zero-copy streaming messaging and interprocess communication. Languages currently supported include C, C++, C#, Go, Java, JavaScript, MATLAB, Python, R, Ruby, and Rust.

Statement on Status of the Consolidated Audit Trail (2018)

> Put simply, the CAT is intended to enable regulators to oversee the securities markets on a consolidated basis—and in so doing, better protect these markets and investors.

[-]

Post Quantum Crypto Standardization Process – Second Round Candidates Announced

> As the latest step in its program to develop effective defenses, the National Institute of Standards and Technology (NIST) has winnowed the group of potential encryption tools—known as cryptographic algorithms—down to a bracket of 26. These algorithms are the ones NIST mathematicians and computer scientists consider to be the strongest candidates submitted to its Post-Quantum Cryptography Standardization project, whose goal is to create a set of standards for protecting electronic information from attack by the computers of both tomorrow and today.

> “These 26 algorithms are the ones we are considering for potential standardization, and for the next 12 months we are requesting that the cryptography community focus on analyzing their performance,”

Links to the 17 public-key encryption and key-establishment algorithms and 9 digital signature algorithms are here: "Round 2 Submissions" https://csrc.nist.gov/Projects/Post-Quantum-Cryptography/Rou...

"Quantum Algorithm Zoo" has moved to https://quantumalgorithmzoo.org .

[-]

Ask HN: How do you evaluate security of OSS before importing?

What tools can I use to evaluate the security posture of an OSS project before I approve its usage with high confidence?

Oddly, whether a project has at least one CVE reported could be interpreted in favor of the project. https://www.cvedetails.com

Do they have a security disclosure policy? A dedicated security mailing list?

Do they pay bounties or participate in e.g. Pwn2Own?

Do they cryptographically sign releases?

Do they cryptographically sign VCS tags (~releases)? commits? `git tag -s` / `git commit/merge -S` https://git-scm.com/book/en/v2/Git-Tools-Signing-Your-Work

Downstream packagers do sometimes/often apply additional patches and then sign their release with the repo (and thus system global) GPG key.

Whether they require "Signed-off-by" may indicate that the project has mature controls and possibly a formal code review process requirement. (Look for "Signed-off-by:" in the release branch (`git commit/merge -s/--signoff`).)

How have they integrated security review into their [iterative] release workflow?

Is the software formally verified? Are parts of the software implementation or spec formally verified?

Does the system trust the channel? The host? Is it a 'trustless' system?

What are the single points of failure?

How is logging configured? To syslog?

Do they run the app as root in a Docker container? Does it require privileged containers?

If it has to run as root, does it drop privileges at startup?

Does the package have an SELinux or AppArmor policy? (Or does it say e.g. "just set SELinux to permissive mode"?)

Is there someone you can pay to support the software in an enterprise environment? Open or closed, such contacts basically never accept liability; but if there is an SLA, do you get a pro-rated bill?

As far as indicators of actual software quality:

How much test coverage is there? Line coverage or statement coverage?

Do they run static analysis tools for all pull requests and releases? Dynamic analysis? Fuzzing?

Of course, closed or open source projects may do none or all of these and still be totally secure or insecure.

[+]
[-]

Ask HN: How can I use my programming skills to support nonprofit organizations?

Lately I've been thinking about doing programming for nonprofits, both because I want to help out with what I'm good at and to hone my skills and potentially get some open source credit.

So far I've had a hard time finding nonprofit projects where I can just pick up something and start programming. I know about freecodecamp.org, but they force you to go through their courses, and as I already have multiple years of experience as a developer, I feel like that would be a waste of time.

Isn't there a way to contribute to a nonprofit organization in a more direct and simple manner, like how you would contribute to an open source project on GitHub?

There are lots of project management systems with issue tracking and kanban boards with swimlanes. Because it's unreasonable to expect all volunteers to have a GH account or even understand what GH is for, support for external identity management and SSO may be essential to getting people to actually log in and change their password regularly.

Saddling a nonprofit with custom-built software and no other maintainers is not what they need. Build (and pay for development, maintenance, timely security upgrades, and security review) or Buy (Where is our data? Who backs it up? How much does it cost for a month or a few years? Is it open source with a hosted option, so that we can pay a developer to add or fix what we need?)

"Solutions architect" may be a more helpful objective title for what's needed. https://en.wikipedia.org/wiki/Solution_architecture

What are their needs? Marketing, accounting, operations, HR

Marketing: web site, maps service, directions, active social media presence that speaks to their defined audience

Accounting: Revenue and expenses, payroll/benefits/HR, projections, "How can we afford to do more?", handle donations and send receipts for tax purposes, reports to e.g. https://charitynavigator.org/ and infographics for wealth-savvy donors

Operations: Asset inventory, project management, volunteer scheduling

HR: payroll, benefits, volunteer scheduling, training, turnover, retaining selfless and enlightenedly-self-interested volunteers

Create a spreadsheet. Rows: needs/features/business processes. Columns: essential, nice to have, software products and services.

Create another spreadsheet. Rows: APIs. Columns: APIs.

Training: what are the [information systems] processes/workflows/checklists? How can I suggest a change? How do we reach consensus that there's a better way to do this? Is there a wiki? Is there a Q&A system?

"How much did you sink on that? Probably seemed like the best option according to the information available at the time, huh? Do you have a formal systems acquisition process? Who votes according to what type of prepared analysis? How much would it cost to switch? What do we need to do to ETL (extract, transform, and load) into a newer better system?"

When estimating TCO for a nonprofit, turnover is a very real consideration. People move. Chances are, as with most organizations TBH, there's a patchwork of partially-integrated and maybe-integrable systems that it may or may not be more cost-effective and maintainable to replace with a cloud ERP specifically designed for nonprofits.

Who has access rights to manually update which parts of the website? How can we include dynamic ([other] database-backed) content in our website? What is a CMS? What is an ERP? What is a CRM? Are these customers, constituents, or both? When did we last speak with those guys? How can people share our asks with social media networks?

If you're not willing or able to make a long-term commitment, the more responsible thing to do is probably to disclose any conflicts of interest and recommend a SaaS solution hosted in a compliant data center.

q="nonprofit erp"

q="nonprofit crm"

q="nonprofit cms" + donation campaign visibility

What time of day are social media posts most likely to get maximum engagement from which segments of our audience? What is our ~ARPU "average revenue per user/follower"?

... As a volunteer and not a FTE, it may be a worthwhile exercise to build a prototype of the new functionality with whatever tools you happen to be familiar with, with the expectation that they'll figure out a way to accomplish the same objectives with their existing systems. If that's not possible, there may be a business opportunity: are there other organizations with the same need? Is there a sustainable market for such a solution? You may be building to be acquired.

[-]

Ask HN: Steps to forming a company?

Hey guys, I'm leaving my firm very shortly to form a startup.

Does anyone have a checklist of proper ways to do things?

I.e. 1. Form a Delaware C corporation with Clerky; 2. Hire payroll company X; 3. Use this company for patents.

any info there?

From "Ask HN: What are your favorite entrepreneurship resources" https://news.ycombinator.com/item?id=15021659 :

> USA Small Business Administration: "10 steps to start your business." https://www.sba.gov/starting-business/how-start-business/10-...

> "Startup Incorporation Checklist: How to bootstrap a Delaware C-corp (or S-corp) with employee(s) in California" https://github.com/leonar15/startup-checklist

> FounderKit has reviews for Products, Services, and Software for founders: https://founderkit.com

... I've heard good things about Gusto for payroll, HR, and benefits through Guideline: https://gusto.com/product/pricing

[-]

A Self-Learning, Modern Computer Science Curriculum

Outstanding resource.

jwasham/coding-interview-university also links to a number of other helpful OER resources: https://github.com/jwasham/coding-interview-university

[-]

MVP Spec

> The criticism of the MVP approach has led to several new approaches, e.g. the Minimum Viable Experiment MVE[19] or the Minimum Awesome Product MAP[20].

https://en.wikipedia.org/wiki/Minimum_viable_product#Critici...

[-]

Can we merge Certificate Transparency with blockchain?

From "REMME – A blockchain-based protocol for issuing X.509 client certificates" https://news.ycombinator.com/item?id=18868540 :

""" In no particular order, there are a number of blockchain PKI (and DNS (!)) proposals and proofs of concept.

"CertLedger: A New PKI Model with Certificate Transparency Based on Blockchain" (2018) https://arxiv.org/pdf/1806.03914 https://scholar.google.com/scholar?q=related:LF9PMeqNOLsJ:sc...

"TABLE 1: Security comparison of Log Based Approaches to Certificate Management" (p.12) lists a number of criteria for blockchain-based PKI implementations:

- Resilient to split-world/MITM attack

- Provides revocation transparency

- Eliminates client certificate validation process

- Eliminates trusted key management

- Preserves client privacy

- Require external auditing

- Monitoring promptness

... These papers also clarify why a highly-replicated decentralized trustless datastore — such as a blockchain — is advantageous for PKI. WoT is not mentioned.

"Blockchain-based Certificate Transparency and Revocation Transparency" (2018) https://fc18.ifca.ai/bitcoin/papers/bitcoin18-final29.pdf

https://scholar.google.com/scholar?q=related:oEsKmJvdn-MJ:sc...

Who can update and revoke which records in a permissioned blockchain (or a plain old database, for that matter)?

Letsencrypt has a model for proving domain control with ACME; which AFAIU depends upon DNS, too. """

TFA references "Certificate Transparency Using Blockchain" (2018) https://eprint.iacr.org/2018/1232.pdf https://scholar.google.com/scholar?q="Certificate+Transparen...

[+]

> The main issue isn't the support and maintenance of a such distributed network,

Running a permissioned blockchain is nontrivial. "Just fork XYZ and call it a day" doesn't quite describe the amount of work involved. There's read latency at scale. There's merging things to maintain vendor strings; and so on.

> but its integration with current solutions

- Verify issuee identity

- Update (domain/CN/subjectAltName, date) index

- Update cached cert and CRL bundles

- Propagate changes to all clients

> and avoiding centralized middleware services that will weaken the schema described in the documents.

Eventually, a CDN will look desirable. IPFS may fit the bill, IDK?

[+]

google/trillian https://github.com/google/trillian

> Trillian is an implementation of the concepts described in the Verifiable Data Structures white paper, which in turn is an extension and generalisation of the ideas which underpin Certificate Transparency.

> Trillian implements a Merkle tree whose contents are served from a data storage layer, to allow scalability to extremely large trees.
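As a rough illustration of the Merkle tree idea that Trillian and Certificate Transparency build on, here is a minimal sketch in Python. It uses the 0x00/0x01 leaf/node domain-separation prefixes in the style of RFC 6962, but (for brevity) duplicates the last node on odd levels as Bitcoin does; RFC 6962's Merkle Tree Hash splits subtrees differently, so this is not a CT-compatible implementation.

```python
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves):
    """Compute a Merkle root over raw leaf values (simplified sketch).

    Leaves and interior nodes are hashed with 0x00/0x01 prefixes so a
    leaf hash can never collide with an interior-node hash.
    """
    level = [sha256(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:  # odd level: duplicate the last node
            level.append(level[-1])
        level = [sha256(b"\x01" + level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Appending a new leaf changes the root, which is what lets auditors detect any retroactive tampering with the log.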

[-]

Why Don't People Use Formal Methods?

Which universities teach formal methods?

- q=formal+verification https://www.class-central.com/search?q=formal+verification

- q=formal-methods https://www.class-central.com/search?q=formal+methods

Is formal verification a required course or curriculum competency for any Computer Science or Software Engineering / Computer Engineering degree programs?

Is there a certification for formal methods? Something like for engineer-status in other industries?

What are some examples of tools and [OER] resources for teaching and learning formal methods?

- JsCoq

- Jupyter kernel for Coq + nbgrader

- "Inconsistencies, rolling back edits, and keeping track of the document's global state" https://github.com/jupyter/jupyter/issues/333 (jsCoq + hott [+ IJavascript Jupyter kernel], STLC: Simply-Typed Lambda Calculus)

- TDD tests that run FV tools on the spec and the implementation

What are some examples of open source tools for formal verification (that can be integrated with CI to verify the spec AND the implementation)?

What are some examples of formally-proven open source projects?

- "Quark : A Web Browser with a Formally Verified Kernel" (2012) (Coq, Haskell) http://goto.ucsd.edu/quark/

What are some examples of projects using narrow and strong AI to generate perfectly verified software from bad specs that make the customers and stakeholders happy?

From reading through the comments here, people don't use formal methods because they are: cost-prohibitive, inflexible, perceived as incompatible with agile/iterative methods that are more likely to keep customers (who don't know what formal methods are) happy, lack of industry-appropriate regulation, and the cognitive burden of often-incompatible shorthand notations.

[+]
[-]

Steps to a clean dataset with Pandas

To add to the three points in the article:

Data quality https://en.wikipedia.org/wiki/Data_quality

Imputation https://en.wikipedia.org/wiki/Imputation_(statistics)

Feature selection https://en.wikipedia.org/wiki/Feature_selection

datacleaner can drop NaNs, do imputation with "the mode (for categorical variables) or median (for continuous variables) on a column-by-column basis", and encode "non-numerical variables (e.g., categorical variables with strings) with numerical equivalents" with Pandas DataFrames and scikit-learn. https://github.com/rhiever/datacleaner
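A minimal pandas sketch of the column-by-column strategy datacleaner describes (mode for categoricals, median for continuous columns); the frame and column names here are made up for illustration:

```python
import pandas as pd

# Toy frame with a missing categorical and a missing continuous value
df = pd.DataFrame({
    "color": ["red", None, "red", "blue"],
    "size": [1.0, None, 3.0, 5.0],
})

# Impute the mode for the categorical column and the median for the
# continuous column, one column at a time
df["color"] = df["color"].fillna(df["color"].mode()[0])
df["size"] = df["size"].fillna(df["size"].median())
```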

sklearn-pandas "[maps] DataFrame columns to transformations, which are later recombined into features", and provides "A couple of special transformers that work well with pandas inputs: CategoricalImputer and FunctionTransformer" https://github.com/scikit-learn-contrib/sklearn-pandas

Featuretools https://github.com/Featuretools/featuretools

> Featuretools is a python library for automated feature engineering. [using DFS: Deep Feature Synthesis]

auto-sklearn does feature selection (with e.g. PCA) in a "preprocessing" step; as well as "One-Hot encoding of categorical features, imputation of missing values and the normalization of features or samples" https://automl.github.io/auto-sklearn/master/manual.html#tur...

auto_ml uses "Deep Learning [with Keras and TensorFlow] to learn features for us, and Gradient Boosting [with XGBoost] to turn those features into accurate predictions" https://auto-ml.readthedocs.io/en/latest/deep_learning.html#...

[-]

Reahl – A Python-only web framework

kim0 | 2019-01-19 19:38:48 | 165
[+]
[+]

Before GWT, there was the Wt framework (C++), and then JWt (Java), which handle both the server and client sides (with widgets in a tree).

Wt: https://en.wikipedia.org/wiki/Wt_(web_toolkit)

JWt: https://en.m.wikipedia.org/wiki/JWt_(Java_web_toolkit)

GWT: https://en.wikipedia.org/wiki/Google_Web_Toolkit

Now we have Babel, ES YYYY, and faster browser release cycles.

[+]
[-]

Ask HN: How can you save money while living on poverty level?

I freelance remotely, making roughly $1200 a month as a programmer because I only work 10 hours maximum each week (limited by my contract). I share the apartment with my mom, and it's a Section 8 apartment, so our rent contributions are based on the income we make. My contribution towards rent is $400 a month.

Although I make more money than my mom (she's of retirement age and only works 1-2 days a week), while I'm looking for more work I want to figure out how to move out and live more independently on only $1200 a month.

I need to live frugally and want to know what I can cut more easily. I own a used car (already paid in full), and pay my own car insurance, electricity, phone and internet. After all that I have about $400 left each month which can be eaten up by going out or some emergency funds.

More recently I had to pay for my new city parking sticker, so that's $100 more in expenses this particular month. I would be satisfied just living in a far-off town paying the same $400 a month; I feel my dollars would stretch further since I'd get 100% more privacy for the same price.

On top of that, this job is a contract job, so I need to put money aside to pay my own taxes. This $1200 is basically living at poverty level. Any ideas to make saving work? Is it even possible for people in the US to save while living in poverty?

That's not a living wage (or a full time job). There are lots of job search sites.

Spending some time on a good resume / CV / portfolio would probably be a good investment with positive ROI.

Is there a nonprofit that you could volunteer with to increase your hireability during the other 158 hours of the week?

Or an online course with a credential that may or may not have positive ROI as a resume item?

Is there a code school in your city with a "you don't pay unless you land a full time job with a living wage and benefits" guarantee?

What is your strategy for business and career networking?

From https://westurner.github.io/hnlog/#comment-17894632 :

> Personal Finance (budgets, interest, growth, inflation, retirement)

Personal Finance https://en.wikipedia.org/wiki/Personal_finance

Khan Academy > College, careers, and more > Personal finance https://www.khanacademy.org/college-careers-more/personal-fi...

"CS 007: Personal Finance For Engineers" https://cs007.blog

https://reddit.com/r/personalfinance/wiki

[-]

Show HN: Generate dank mnemonic seed phrases in the terminal

From https://github.com/lukechilds/doge-seed :

> The first four words will be a randomly generated Doge-like sentence.

> The seed phrases are fully valid checksummed BIP39 seeds. They can be used with any cryptocurrency and can be imported into any BIP39 compliant wallet.

> […] However there is a slight reduction in entropy due to the introduction of the doge-isms. A doge seed has about 19.415 fewer bits of entropy than a standard BIP39 seed of equivalent length.
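The back-of-envelope entropy bookkeeping behind that figure: each BIP39 word is chosen from a 2048-word list, so it contributes log2(2048) = 11 bits; the 19.415-bit reduction is the README's own number for the cost of constraining the first four words to a Doge-like sentence.

```python
import math

# Each BIP39 word is drawn from a 2048-word list
bits_per_word = math.log2(2048)               # 11.0 bits/word

# A 12-word seed encodes 132 bits (128 bits entropy + 4 checksum bits)
standard_12_word_bits = 12 * bits_per_word    # 132.0

# Per the doge-seed README, the templated first four words cost
# ~19.415 bits, leaving roughly:
effective_bits = standard_12_word_bits - 19.415
```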

[-]

Can you sign a quantum state?

> Abstract. Cryptography with quantum states exhibits a number of surprising and counterintuitive features. In a 2002 work, Barnum et al. argued informally that these strange features should imply that digital signatures for quantum states are impossible [6].

> In this work, we perform the first rigorous study of the problem of signing quantum states. We first show that the intuition of [6] was correct, by proving an impossibility result which rules out even very weak forms of signing quantum states. Essentially, we show that any non-trivial combination of correctness and security requirements results in negligible security.

> This rules out all quantum signature schemes except those which simply measure the state and then sign the outcome using a classical scheme. In other words, only classical signature schemes exist.

> We then show a positive result: it is possible to sign quantum states, provided that they are also encrypted with the public key of the intended recipient. Following classical nomenclature, we call this notion quantum signcryption. Classically, signcryption is only interesting if it provides superior efficiency to simultaneous encryption and signing. Our results imply that, quantumly, it is far more interesting: by the laws of quantum mechanics, it is the only signing method available.

> We develop security definitions for quantum signcryption, ranging from a simple one-time two-user setting, to a chosen-ciphertext-secure many-time multi-user setting. We also give secure constructions based on post-quantum public-key primitives. Along the way, we show that a natural hybrid method of combining classical and quantum schemes can be used to “upgrade” a secure classical scheme to the fully-quantum setting, in a wide range of cryptographic settings including signcryption, authenticated encryption, and chosen-ciphertext security.

"Quantum signcryption"

[-]

Lattice Attacks Against Weak ECDSA Signatures in Cryptocurrencies [pdf]

[+]

> Countermeasures. All of the attacks we discuss in this paper can be prevented by using deterministic ECDSA nonce generation [29], which is already implemented in the default Bitcoin and Ethereum libraries.

[-]

REMME – A blockchain-based protocol for issuing X.509 client certificates

[+]
[+]

In no particular order, there are a number of blockchain PKI (and DNS (!)) proposals and proofs of concept.

"CertLedger: A New PKI Model with Certificate Transparency Based on Blockchain" (2018) https://arxiv.org/pdf/1806.03914 https://scholar.google.com/scholar?q=related:LF9PMeqNOLsJ:sc...

"TABLE 1: Security comparison of Log Based Approaches to Certificate Management" (p.12) lists a number of criteria for blockchain-based PKI implementations:

- Resilient to split-world/MITM attack

- Provides revocation transparency

- Eliminates client certificate validation process

- Eliminates trusted key management

- Preserves client privacy

- Require external auditing

- Monitoring promptness

... These papers also clarify why a highly-replicated decentralized trustless datastore — such as a blockchain — is advantageous for PKI. WoT is not mentioned.

"Blockchain-based Certificate Transparency and Revocation Transparency" (2018) https://fc18.ifca.ai/bitcoin/papers/bitcoin18-final29.pdf

https://scholar.google.com/scholar?q=related:oEsKmJvdn-MJ:sc...

Who can update and revoke which records in a permissioned blockchain (or a plain old database, for that matter)?

Letsencrypt has a model for proving domain control with ACME; which AFAIU depends upon DNS, too.

[-]

California grid data is live – solar developers take note

> It looks like California is at least two generations of technology ahead of other states. Let’s hope the rest of us catch up, so that we have a grid that can make an asset out of every building, every battery, and every solar system.

+1. Are there any other states with similar grid data available for optimization; or any plans to require or voluntarily offer such a useful capability?

[-]

Why attend predatory colleges in the US?

> Why would people attend predatory colleges?

Why would people make an investment with insufficient ROI (Return on Investment)?

Insufficient information.

College Scorecard [1] is a database with a web interface for finding and comparing schools according to a number of objective criteria. CollegeScorecard launched in 2015. It lists "Average Annual Cost", "Graduation Rate", and "Salary After Attending" on the search results pages. When you review a detail page for an institution, there are many additional statistics; things like: "Typical Total Debt After Graduation" and "Typical Monthly Loan Payment".

The raw data behind CollegeScorecard can be downloaded from [2]. The "data_dictionary" tab of the "Data Dictionary" spreadsheet describes the data schema.

[1] https://collegescorecard.ed.gov

[2] https://collegescorecard.ed.gov/data/

Khan Academy > "College, careers, and more" [3] may be a helpful supplement to a full-time college admissions counselor in a secondary education institution.

[3] https://www.khanacademy.org/college-careers-more

(I haven't the time to earn 10 academia.stackexchange points in order to earn the prestigious opportunity to contribute this answer to such a forum with threaded comments. In the academic journal system, journals sell academics' work (i.e. schema.org/ScholarlyArticle PDFs, mobile-compatible responsive HTML 5, RDFa, JSON-LD structured data) and keep all of the revenue).

"Because I need money for school! Next question. CPU: College Textbook costs and CPI: All over time t?!"

[-]

Ask HN: Data analysis workflow?

What kind of workflow do you employ when designing a data-flow or analyzing data?

Let me give a concrete example. For the past year, I have been selling stuff on the interwebs through two payment processors one of them being PayPal.

The selling process was put together with a bunch of SaaS hooking everything together through webhooks and notifications.

Now I need to step up that control and produce a proper flow to handle sign-up, subscription, and payment.

Before doing that, I'm analyzing and trying to reconcile all transactions to make sure the books are OK and nothing went unseen. There lies the problem. I have data coming from different sources such as databases, Excel files, CSV exports, and some JSON files.

At first, I started dealing with it by having all the data in CSV files and trying to make sense of them using code and running queries within the code.

As I found holes in the data I had to dig up more data from different sources and it became a pain to continue with code. I now imported everything into Postgres and have been "debugging" with SQL.

As I advanced through the process I had to generate a lot of routines to collect and match data. I also have to keep all the data files around and organized which is very hard to do because I'm all over the place trying to find where the problem is.

How do you handle it? What kind of workflow? Any best practices or recommendations from people who do this for a living?

Pachyderm may be basically what you're looking for. It does data version control with/for language-agnostic pipelines that don't need to always redo the ETL phase. https://www.pachyderm.io

Dask-ML works with {scikit-learn, xgboost, tensorflow, TPOT,}. ETL is your responsibility. Loading things into parquet format affords a lot of flexibility in terms of (non-SQL) datastores or just efficiently packed files on disk that need to be paged into/over in RAM. http://ml.dask.org/examples/scale-scikit-learn.html

Sklearn.pipeline.Pipeline API: {fit(), transform(), predict(), score(),} https://scikit-learn.org/stable/modules/generated/sklearn.pi...
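A minimal sketch of that Pipeline API: chained steps share one fit/predict/score surface, so preprocessing and the estimator travel together. The data here is a made-up toy example.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Scaling and classification run as one chained estimator
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])

# Tiny linearly separable toy dataset
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0, 0, 1, 1]
pipe.fit(X, y)
```

Calling `pipe.predict(...)` applies the scaler before the classifier, so the same preprocessing is guaranteed at train and inference time.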

https://docs.featuretools.com can also minimize ad-hoc boilerplate ETL / feature engineering :

> Featuretools is a framework to perform automated feature engineering. It excels at transforming temporal and relational datasets into feature matrices for machine learning.

The PLoS 10 Simple Rules papers distill a number of best practices:

"Ten Simple Rules for Reproducible Computational Research" http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fj...

“Ten Simple Rules for Creating a Good Data Management Plan” http://journals.plos.org/ploscompbiol/article?id=10.1371/jou...

In terms of the scientific method, a null hypothesis like "there is no significant relation between the [independent and dependent] variables" may be dangerously unprofessional p-hacking and data dredging; and may result in an overfit model that seems to predict or classify the training and test data (when split with e.g. sklearn.model_selection.train_test_split and a given random seed).
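The train/test split mentioned above looks like this; fixing `random_state` makes the split reproducible, which matters when you want to distinguish a real effect from one particular lucky partition of the data:

```python
from sklearn.model_selection import train_test_split

X = list(range(10))
y = [i % 2 for i in range(10)]

# A fixed random_state makes the split reproducible across runs
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```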

One of these days (in the happy new year!) I'll get around to updating these notes with the aforementioned tools and docs: https://wrdrd.github.io/docs/consulting/data-science#scienti...

IDK what https://kaggle.com/learn has specifically in terms of analysis workflow? Their docker containers have very many tools configured in a reproducible way: https://github.com/Kaggle/docker-python/blob/master/Dockerfi...

[-]

Ask HN: What is your favorite open-source job scheduler

Too many business scripts rely on cron(8) to run. Classic cron cannot handle task duration, failure (email only), same-task pile-up, linting, ...

So what is your favorite open-source, easy-to-bundle/deploy job scheduler that is easy to use, has logging capacity and config file linting, and can handle common use cases: kill if longer than a limit, limit resources, prevent launching when the previous run is not finished, ...

systemd-crontab-generator may be usable for something like linting classic crontabs? https://github.com/systemd-cron/systemd-cron

Systemd/Timers as a cron replacement: https://wiki.archlinux.org/index.php/Systemd/Timers#As_a_cro...

Celery supports periodic tasks:

> Like with cron, the tasks may overlap if the first task doesn’t complete before the next. If that’s a concern you should use a locking strategy to ensure only one instance can run at a time (see for example Ensuring a task is only executed one at a time).

http://docs.celeryproject.org/en/latest/userguide/periodic-t...
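The locking strategy the Celery docs point to can be sketched with a Unix file lock; the helper name and flock-based approach here are illustrative, not Celery API, and `fcntl` is Unix-only:

```python
import fcntl


def run_exclusively(lock_path, job):
    """Run `job` only if no other process holds `lock_path`.

    Returns True if the job ran, False if a previous run still holds
    the lock (so overlapping runs are skipped rather than piled up).
    """
    with open(lock_path, "w") as lockfile:
        try:
            fcntl.flock(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except BlockingIOError:
            return False  # a previous run is still going; skip this one
        try:
            job()
        finally:
            fcntl.flock(lockfile, fcntl.LOCK_UN)
        return True
```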

[-]

How to Version-Control Jupyter Notebooks

tosh | 2018-12-22 06:53:46 | 164

Mentioned in the article: manual nbconvert, nbdime, ReviewNB (currently GitHub only), jupytext.

Jupytext includes a bit of YAML in the e.g. Python/R/Julia/Markdown header. https://github.com/mwouts/jupytext

[+]
[+]

Teaching and Learning with Jupyter (A book by Jupyter for Education)

[+]
[+]
[+]
[-]

Margin Notes: Automatic code documentation with recorded examples from runtime

[+]
[+]
[+]

1. sys.settrace() for {call, return, exception, c_call, c_return, and c_exception}

2. Serialize as/to doctests. Is there a good way to serialize Python objects as Python code?

3. Add doctests to callables' docstrings with AST
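Step 1 above can be sketched with `sys.settrace`, which invokes a trace function on interpreter events such as `call`; this toy example just records the names of called functions:

```python
import sys

calls = []


def tracer(frame, event, arg):
    # Record each Python-level call event as it happens
    if event == "call":
        calls.append(frame.f_code.co_name)
    return None  # no per-line tracing needed


def add(a, b):
    return a + b


sys.settrace(tracer)
result = add(1, 2)
sys.settrace(None)  # always disable tracing when done
```

A real recorder would also capture `frame.f_locals` on `call` and `arg` on `return` events to reconstruct input/output pairs for the doctests.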

Mutation testing tools may have already implemented serialization to doctests but IDK about docstring modification.

... MOSES is an evolutionary algorithm that mutates and simplifies a combo tree until it has built a function with less error for the given input/output pairs.

[-]

Time to break academic publishing's stranglehold on research

[+]
[+]
[+]
[+]
[+]

https://hypothes.is supports threaded comments on anything with a URI; including PDFs and specific sentences or figures thereof. All you have to do is register an account and install the browser extension or include the JS in the HTML.

It's based on open standards and an open platform.

W3C Web Annotations: http://w3.org/annotation

About Hypothesis: https://web.hypothes.is/about/

[-]

Ask HN: How can I learn to read mathematical notation?

There are a lot of fields I'm interested in, such as machine learning, but I struggle to understand how they work as most resources I come across are full of complex mathematical notation that I never learned how to read in school or University.

How do you learn to read this stuff? I'm frequently stumped by an academic paper or book that I just can't understand due to mathematical notation that I simply cannot read.

[+]

There are a number of Wikipedia pages which catalog various uses of symbols for various disciplines:

Outline_of_mathematics#Mathematical_notation https://en.wikipedia.org/wiki/Outline_of_mathematics#Mathema...

List_of_mathematical_symbols https://en.wikipedia.org/wiki/List_of_mathematical_symbols

List_of_mathematical_symbols_by_subject https://en.wikipedia.org/wiki/List_of_mathematical_symbols_b...

Greek_letters_used_in_mathematics,_science,_and_engineering https://en.wikipedia.org/wiki/Greek_letters_used_in_mathemat...

Latin_letters_used_in_mathematics https://en.wikipedia.org/wiki/Latin_letters_used_in_mathemat...

For learning the names of symbols (and maybe also their meaning as conventionally utilized in a particular field at a particular time in history), spaced repetition with flashcards in a tool like Anki may be helpful.

For typesetting, e.g. Jupyter Notebook uses MathJax to render LaTeX with JS.

latex2sympy may also be helpful for learning notation.
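Another way to internalize notation is to translate it into executable code; for example, Σ and ∏ are just loops. A small sketch (the values of `n` are arbitrary):

```python
import math

# Σ_{i=1}^{n} i²  reads: "the sum, for i from 1 to n, of i squared"
n = 4
total = sum(i**2 for i in range(1, n + 1))

# ∏_{i=1}^{n} i  reads: "the product, for i from 1 to n, of i" (i.e. n!)
product = math.prod(range(1, n + 1))
```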

… data-science#mathematical-notation https://wrdrd.github.io/docs/consulting/data-science#mathema...

[-]

New law lets you defer capital gains taxes by investing in opportunity zones

[+]
[+]
[+]

> Is it just capital gains? Wondering if it applies to any other forms of active or passive income.

I would also like some information about this.

+1 for investing in distressed areas; self-nominated with intent or otherwise.

If it's capital gains only, -1 on requiring the sale of capital assets in order to be sufficiently incentivized. (Because then the opportunity for tax-advantaged investment in Opportunity Zones is denied to persons without assets to liquidate; i.e. unequal opportunity).

Q: "Why don't I get the same tax-advantage for investing in a/my opportunity zone community?"

A [AFAIU]: "Because you don't have capital gains; only regular income" (~="Because you're not an accredited investor")

[-]

How to Write a Technical Paper [pdf]

[+]

5 paragraph essay? https://en.wikipedia.org/wiki/Five-paragraph_essay

> The five-paragraph essay is a format of essay having five paragraphs: one introductory paragraph, three body paragraphs with support and development, and one concluding paragraph. Because of this structure, it is also known as a hamburger essay, one three one, or a three-tier essay.

The digraph presented in the OP really is a great approach, IMHO:

## Introduction

## Related Work, System Model, Problem Statement

## Your Solution

## Analysis

## Simulation, Experimentation

## Conclusion

... "Elements of the scientific method" https://en.wikipedia.org/wiki/Scientific_method#Elements_of_...

[+]
[-]

Jeff Hawkins Is Finally Ready to Explain His Brain Research

Cortical column: https://en.wikipedia.org/wiki/Cortical_column

> In the neocortex 6 layers can be recognized although many regions lack one or more layers, fewer layers are present in the archipallium and the paleopallium.

What this means in terms of optimal artificial neural network architecture and parameters will be interesting to learn about; in regards to logic, reasoning, and inference.

According to "Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function" https://www.frontiersin.org/articles/10.3389/fncom.2017.0004... , the human brain appears to be [at most] 11-dimensional (11D); in terms of algebraic topology https://en.wikipedia.org/wiki/Algebraic_topology

Relatedly,

"Study shows how memories ripple through the brain" https://www.ninds.nih.gov/News-Events/News-and-Press-Release...

> The [NeuroGrid] team was also surprised to find that the ripples in the association neocortex and hippocampus occurred at the same time, suggesting the two regions were communicating as the rats slept. Because the association neocortex is thought to be a storage location for memories, the researchers theorized that this neural dialogue could help the brain retain information.

Re: Topological graph theory [1], is it possible to embed a graph on a space filling curve [2] (such as a Hilbert R-tree [3])?

[1] https://en.wikipedia.org/wiki/Topological_graph_theory

[2] https://en.wikipedia.org/wiki/Space-filling_curve

[3] https://en.wikipedia.org/wiki/Hilbert_R-tree

[4] https://github.com/bup/bup (git packfiles)
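On the question above: one common way to "embed" graph nodes on a space-filling curve is to map each node's coordinates to its Hilbert index and sort by that index, which is essentially what the Hilbert R-tree [3] does for rectangle centers. A minimal sketch (the standard iterative xy2d conversion; `n` must be a power of two, and the node coordinates are made up):

```python
def xy2d(n, x, y):
    """Map (x, y) in an n x n grid (n a power of two) to its Hilbert curve index."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate/flip the quadrant so the curve stays contiguous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Sort hypothetical node coordinates along the curve: nearby nodes in the
# plane tend to get nearby indices, which is the locality R-trees exploit.
nodes = [(3, 1), (0, 0), (1, 1), (0, 1)]
ordered = sorted(nodes, key=lambda p: xy2d(4, *p))
print(ordered)  # → [(0, 0), (1, 1), (0, 1), (3, 1)]
```

This linearizes the node set, not the edges; whether that counts as a graph embedding in the topological-graph-theory sense [1] is a different question.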


Interstellar Visitor Found to Be Unlike a Comet or an Asteroid


> Not if it's something like another civilization's Tesla Roadster.

'Oumuamua is red and headed toward Pegasus (the winged horse) after a very long journey that began long ago and far away in spacetime. It is tumbling wildly off-kilter, potentially creating a magnetic field that would be useful for interplanetary space travel.

They're probably pointing us to somewhere else from somewhere else.

If this is any indication of the state of another civilization's advanced physics, and it missed us by a wide margin, they're probably laughing at our energy and water markets. It also suggests we should be focused on asteroid impact avoidance (and then we will really laugh about rockets, red electromagnetic kinetic energy machines, and asteroid mining). https://en.wikipedia.org/wiki/Asteroid_impact_avoidance

"Amateurs"

[We watch it fly by, heads all turning]

Maybe it would've been better to have put a lone starman in the passenger seat, or two starpeople total?

Given the skull shape of 2015 TB145 [1] (which passed Earth in October 2015 and was due to return in November 2018), maybe 'Oumuamua [2] is a pathology of Mars and an acknowledgement of our spacefaring intentions? Red, subsurface water, disrupted magnetic field.

[1] https://en.wikipedia.org/wiki/2015_TB145

[2] https://en.wikipedia.org/wiki/%CA%BBOumuamua

In regards to a red, unshielded, earth vehicle floating in solar orbit with a suited anthropomorphic creature whose head is too big for the windshield:

"What happened here?"

"That's not a knife... This is a knife." -- Crocodile Dundee


Publishing more data behind our reporting


> Publishing raw data itself is definitely a good start but there also needs to be a push towards a standardized way of sharing data along with its lineage (dependent sources, experimental design/generation process, metadata, graph relationship of other uses, etc.).

Linked Data based on URIs is reusable. ( https://5stardata.info )

The Schema.org Health and Life Sciences extension is ahead of the game here, IMHO. MedicalObservationalStudy and MedicalTrial are subclasses of https://schema.org/MedicalStudy . {DoubleBlindedTrial, InternationalTrial, MultiCenterTrial, OpenTrial, PlaceboControlledTrial, RandomizedTrial, SingleBlindedTrial, SingleCenterTrial, and TripleBlindedTrial} are subclasses of schema.org/MedicalTrial.

A schema.org/MedicalScholarlyArticle (a subclass of https://schema.org/ScholarlyArticle ) can have a https://schema.org/Dataset. https://schema.org/hasPart is the inverse of https://schema.org/isPartOf .
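As a sketch of how those pieces fit together, here is hypothetical JSON-LD for a MedicalScholarlyArticle linking its Dataset (the types and properties are from schema.org as linked above; the names, URLs, and identifiers are invented for illustration):

```python
import json

# Hypothetical JSON-LD: a MedicalScholarlyArticle that hasPart a Dataset,
# with the Dataset pointing back via isPartOf. All identifiers are made up.
article = {
    "@context": "https://schema.org",
    "@id": "#article",
    "@type": "MedicalScholarlyArticle",
    "name": "Example trial write-up",
    "hasPart": {
        "@type": "Dataset",
        "name": "Raw trial measurements",
        "isPartOf": {"@id": "#article"},
        "distribution": {
            "@type": "DataDownload",
            "contentUrl": "https://example.org/trial-data.csv",
        },
    },
}

jsonld = json.dumps(article, indent=2)
print(jsonld)
```

Markup like this (as HTML+RDFa or an embedded JSON-LD script block) is what would let aggregators walk from an article to its data.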

More structured predicates which indicate the degree to which evidence supports/confirms or disproves current and other hypotheses (according to a particular Person or Persons on a given date and time; given a level of scrutiny of the given information) are needed.

In regards to epistemology, there was some work on Fact Checking ( e.g. https://schema.org/ClaimReview ) in recent times. To quote myself here, from https://news.ycombinator.com/item?id=15528824 :

> In terms of verifying (or validating) subjective opinions, correlational observations, and inferences of causal relations; #LinkedMetaAnalyses of documents (notebooks) containing structured links to their data as premises would be ideal. Unfortunately, PDF is not very helpful in accomplishing that objective (in addition to being a terrible format for review with screen reader and mobile devices): I think HTML with RDFa (and/or CSVW JSONLD) is our best hope of making at least partially automated verification of meta analyses a reality.

"#LinkedReproducibility"; "#LinkedMetaAnalyses", "#StudyGraph"


CSV 1.1 – CSV Evolved (for Humans)


CSVW: CSV on the Web https://w3c.github.io/csvw/

"CSV on the Web: A Primer" http://www.w3.org/TR/tabular-data-primer/

"Model for Tabular Data and Metadata on the Web" http://www.w3.org/TR/tabular-data-model/

"Generating JSON from Tabular Data on the Web" (csv2json) http://www.w3.org/TR/csv2json/

"Generating RDF from Tabular Data on the Web" (csv2rdf) http://www.w3.org/TR/csv2rdf/

...

N. Allow authors to (1) specify how many header rows are metadata and (2) what each row is. For example: 7 metadata header rows: {column label, property URI [path], datatype URI, unit URI, accuracy, precision, significant figures}
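A rough sketch of what parsing such a file could look like, assuming (hypothetically) that metadata header rows are marked with a leading `#` and labeled in their first cell; this is an illustration of the proposal above, not part of any CSVW spec:

```python
import csv
import io

# Invented layout: labeled metadata header rows, then plain data rows.
raw = """#label,mass,length
#propertyUrl,https://example.org/prop/mass,https://example.org/prop/length
#unit,kg,m
1.5,2.0
3.0,4.0
"""

rows = list(csv.reader(io.StringIO(raw)))
meta = {}
data_start = 0
for i, row in enumerate(rows):
    if row and row[0].startswith("#"):
        meta[row[0][1:]] = row[1:]  # e.g. meta["unit"] = ["kg", "m"]
        data_start = i + 1
    else:
        break

columns = meta["label"]
data = [dict(zip(columns, map(float, row))) for row in rows[data_start:] if row]
print(meta["unit"])  # → ['kg', 'm']
print(data[0])       # → {'mass': 1.5, 'length': 2.0}
```

With property and unit URIs carried alongside the column labels, two such files can be joined or concatenated by URI rather than by guessing at column names.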

With URIs, we can merge, join, and concatenate data (when e.g. study control URIs for e.g. single/double/triple blinding/masking indicate that the https://schema.org/Dataset meets meta-analysis inclusion criteria).

"#LinkedReproducibility"; "#LinkedMetaAnalyses"


Ask HN: Which plants can be planted indoors and easily maintained?

Chlorophytum comosum (spider plants) are good air-filtering houseplants that are also easy to take starts of: https://en.wikipedia.org/wiki/Chlorophytum_comosum

Houseplant: https://en.wikipedia.org/wiki/Houseplant


The down side to wind power


> IMO nuclear is the only realistic alternative to coal to provide reliable, zero-emission "base load" power generation. Wind and solar could make sense in some use cases but not in general.

How much heat energy does a reactor - with n meters of concrete around it, located on a water supply in order to use water in an open or closed cooling loop, protected with national security resources - waste into the environment?

I'd be interested to see which power sources the authors of this study would choose as a control for these otherwise merely sensational stats.

From https://news.ycombinator.com/item?id=17806589 :

> Canada (2030), France (2021), and the UK (2025) are all working to entirely phase out coal-fired power plants for very good reasons (such as neonatal health).

Would you burn a charcoal grill in an enclosed space like a garage? No.

Thermodynamics of Computation Wiki

"Quantum knowledge cools computers: New understanding of entropy" (2011) https://www.sciencedaily.com/releases/2011/06/110601134300.h...

> The new study revisits Landauer's principle for cases when the values of the bits to be deleted may be known. When the memory content is known, it should be possible to delete the bits in such a manner that it is theoretically possible to re-create them. It has previously been shown that such reversible deletion would generate no heat. In the new paper, the researchers go a step further. They show that when the bits to be deleted are quantum-mechanically entangled with the state of an observer, then the observer could even withdraw heat from the system while deleting the bits. Entanglement links the observer's state to that of the computer in such a way that they know more about the memory than is possible in classical physics.

"The thermodynamic meaning of negative entropy" (2011) https://www.nature.com/articles/nature10123

Landauer's principle: https://en.wikipedia.org/wiki/Landauer%27s_principle

"Thin film converts heat from electronics into energy" (2018) http://news.berkeley.edu/2018/04/16/thin-film-converts-heat-...

> This study reports new records for pyroelectric energy conversion energy density (1.06 Joules per cubic centimeter), power density (526 Watts per cubic centimeter) and efficiency (19 percent of Carnot efficiency, which is the standard unit of measurement for the efficiency of a heat engine).

"Pyroelectric energy conversion with large energy and power density in relaxor ferroelectric thin films" (2018) https://www.nature.com/articles/s41563-018-0059-8

Carnot heat engine > Carnot cycle, Carnot's theorem, "Real heat engines": https://en.wikipedia.org/wiki/Carnot_heat_engine

Carnot's theorem > Applicability to fuel cells and batteries: https://en.wikipedia.org/wiki/Carnot%27s_theorem_(thermodyna...

> Since fuel cells and batteries can generate useful power when all components of the system are at the same temperature [...], they are clearly not limited by Carnot's theorem, which states that no power can be generated when [...]. This is because Carnot's theorem applies to engines converting thermal energy to work, whereas fuel cells and batteries instead convert chemical energy to work.[6] Nevertheless, the second law of thermodynamics still provides restrictions on fuel cell and battery energy conversion


Is there enough heat energy from a datacenter to -- rather than heating oceans (which can result in tropical storms) -- turn a turbine (to convert heat energy back into electrical energy)?

Is there a statistic which captures the amount of heat energy discharged into ocean/river/lake water? "100% clean energy with PPAs (Power Purchase Agreements)" while bleeding energy into the oceans isn't quite representative of the total system.

"How to Reuse Waste Heat from Data Centers Intelligently" (2016) https://www.datacenterknowledge.com/archives/2016/05/10/how-...

> There are two big issues with data center waste heat reuse: the relatively low temperatures involved and the difficulty of transporting heat. Many of the reuse applications to date have used the low-grade server exhaust heat in an application physically adjacent to the data center, such as a greenhouse or swimming pool in the building next door. This is reasonable given the relatively low temperatures of data center return air, usually between 28° and 35°C (80-95°F), and the difficulty in moving heat around. Moving heat energy frequently requires insulated ducting or plumbing instead of cheap, convenient electrical cables. Trenching and installation to run a hot water pipe from a data center to a heat user may cost as much as $600 per linear foot. Just the piping to share heat with a facility one-quarter mile away might add $750,000 or more to a data center construction project. There’s currently not much that can be done to reduce this cost.

> To address the low-temperature issue, some data center operators have started using heat pumps to increase the temperature of waste heat, making the thermal energy much more valuable, and marketable. Waste heat coming out of heat pumps at temperatures in the range of 55° to 70°C (130-160°F) can be transferred to a liquid medium for easier transport and can be used in district heating, commercial laundry, industrial process heat, and many more. There are even High Temperature (HT) and Very High Temperature (VHT) heat pumps capable of moving low-grade data center heat up to 140°C.
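A back-of-envelope Carnot bound, using the exhaust temperatures quoted above and assuming a 20 °C ambient sink, shows why turning data center waste heat directly back into electricity recovers so little:

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    """Maximum fraction of heat convertible to work between two temperatures."""
    t_hot = t_hot_c + 273.15   # convert Celsius to kelvin
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# 35 C server exhaust vs. 20 C ambient: only ~5% is even theoretically recoverable.
low_grade = carnot_efficiency(35, 20)
# 70 C heat-pump output raises the ceiling to ~15%, but the heat pump itself
# consumes work to get there.
boosted = carnot_efficiency(70, 20)
print(f"{low_grade:.3f} {boosted:.3f}")
```

Real heat engines reach only a fraction of the Carnot limit, which is why reuse (district heating, greenhouses) beats reconversion for low-grade heat.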

Heat Pump: https://en.wikipedia.org/wiki/Heat_pump

"Data Centers That Recycle Waste Heat" https://www.datacenterknowledge.com/data-centers-that-recycl...


Why Do Computers Use So Much Energy?

> Also, to foster research on this topic we have built a wiki, combining lists of papers, websites, events pages, etc. We highly encourage people to visit it, sign up, and start improving it; the more scientists get involved, from the more fields, the better!

Thermodynamics of Computation Wiki https://centre.santafe.edu/thermocomp/Santa_Fe_Institute_Col...

HN: https://news.ycombinator.com/item?id=18146854


Justice Department Sues to Stop California Net Neutrality Law


Expansion of federal jurisdiction under the Commerce Clause is an egregious violation of Constitutional law.

Does the federal government have the enumerated right under the Commerce Clause to, for example, ban football for anyone that doesn't have a disability? No!

Was the Commerce Clause sufficient authorization for Federal prohibition of alcohol? No! An Amendment to the Constitution was necessary. And Federal prohibition, with the unequal State prohibitions it necessitated, miserably failed to achieve the intended outcomes.

Where is the limit? How can they claim to support a states' rights, limited government position while expanding jurisdiction under the Interstate Commerce Clause? "Substantially affecting" interstate commerce is a very slippery slope.

Furthermore, declassification from Title II did effectively relieve the FCC of authority to regulate ISPs - as the current administration's FCC very clearly argued (in favor of special interests over those of the majority): they claimed regulation was the FTC's job, and now they're claiming it's theirs.

Without Title II classification, FCC has no authority to preempt state net neutrality regulation. California and Washington have the right to regulate ISPs within their respective states.

Outrageous!

Limited government: https://en.wikipedia.org/wiki/Limited_government

States' rights: https://en.wikipedia.org/wiki/States%27_rights

[Interstate] Commerce Clause: https://en.wikipedia.org/wiki/Commerce_Clause

Net neutrality in the United States > Repeal of net neutrality policy: https://en.m.wikipedia.org/wiki/Net_neutrality_in_the_United...


To summarize the points made in [1]: products can be sold across state lines; internet service sold in one state cannot be.

[1] https://news.ycombinator.com/item?id=18111651

In my opinion, the court has significantly erred in redefining interstate commerce to include (1) intrastate-only commerce; and (2) non-commerce (i.e. locally grown and unsold wheat, as in Wickard v. Filburn).

Furthermore - and this is a bit off topic - unalienable natural rights (Equality, Life, Liberty, and pursuit of Happiness) are of higher precedence. I mention this because this is yet another case where the court will be interpreting the boundary between State and Federal rights; and it's very clear that the founders intended for the powers of the federal government to be limited -- certainly not something that the Commerce Clause should be interpreted to supersede.

What penalties and civil fines are appropriate for States or executive branch departments that violate the Constitution; for failure to uphold Oaths to uphold the Constitution?


White House Drafts Order to Probe Google, Facebook Practices


> they were able to grow to the size they have become because they are exempted from libel laws under safe harbor

This was not a selective protection. When the government grants limited resources like electromagnetic spectrum and right of way, they're not directly making a monopoly, but the FCC does then claim right to regulate speech.

In the interest of fairness, the FCC classed telecommunication service providers as common carriers; thus authorizing FCC to pass net neutrality protections which require equal prioritization of internet traffic. (No blocking, No throttling, No paid prioritization). The current administration doesn't feel that that's fair, and so they've moved to dismantle said "burdensome regulations".

The current administration is now apparently attempting to argue that information service providers - which are all equally granted safe harbor and obligated to comply with the DMCA - have no right to take down abuse and harassment, because they are supposedly anti-trust monopolies and therefore Freedom of Speech doesn't apply to these corporate persons.

Selective bias, indeed! Broadcast TV and Radio are subject to different rules than Cable (non-broadcast) TV.

Other regimes have attempted to argue that the government has the right to dictate the media as well.

Taking down abuse and harassment is necessary and well within the rights of a person and a corporation in the United States. Taking down certain content is now legally required within 24 hours of notice from the government in the EU.

Where is the line between a media conglomerate that produces news entertainment and an information service provider? If there is none, and the government has the right to regulate "equal time" on non-granted-spectrum media outlets, future administrations could force ConservativeNewsOutletZ and LiberalNewsOutletZ to carry specific non-emergency content, to host abusive and offensive rhetoric, and to be sued for being forced to do so because safe harbor no longer applies.

Can anyone find the story of how the GOP strongarmed and intimidated Facebook into "equal time" (after which we were all shoved full of apparently Russian conservative "fake news" propaganda) before the most recent election, where the GOP won older radio, TV, and print voters and young people didn't vote because it appeared unnecessary?

Meanwhile, the current administration rolled back the "burdensome regulation" that was to prevent ISPs from selling complete internet usage history; regardless of age.

Maybe there's an exercise that would be helpful for understanding the "corporate media filter" and the "social media filter"?

You, having no money -- while watching corporate profits soar and income inequality grow to unprecedented heights -- will choose to take a job that requires you to judge whether thousands of reported pieces of content a day are abusive, harassing, making specific threats, inciting specific destructive acts, recruiting for hate groups, depicting abuse; or just good 'ol political disagreement over issues, values, and the appropriate role of the punishing and/or nurturing state. You will do this for weeks or months, because that's your best option, because nobody else is standing in the mirror behind these people who haven't learned to respectfully disagree over facts and data (evidence).

Next, you will plan segments of content time interspersed with ads paid for by people who are trying to sell their products, grow their businesses, and reach people. You will use a limited amount of our limited electromagnetic spectrum which the government has sold your corporate overlords for a limited period of time, contingent upon your adherence to specific and subjective standards of decency as codified in the stated regulations.

In both cases, your objective is to maximize profit for shareholders.

Your target audiences may vary from undefined (everyone watching), to people who only want to review fun things that they agree with in their safe little microcosm of the world, to people who know how to find statistics like corporate profits, personal savings rate, infant mortality, healthcare costs per capita, and other Indicators identified as relevant to the Targets and Goals found in the UN Sustainable Development Goals (Global Goals Indicators).

Do you control what the audience shares?


Ask HN: Books about applying the open source model to society

I've been thinking for some time now that as productivity keeps growing, not all people will need to work any more. Society will eventually start to resemble an open source project where a few core contributors do the real work (and get to decide the direction), some others help around, and the majority of people just benefit without having to do anything. I'm wondering if any books have been written to explore this concept further?

> I've been thinking for some time now that as productivity keeps growing, not all people will need to work any more.

How much energy do autotrophs and heterotrophs need to thrive?

"But then we'll be rewarding laziness!"

Some people do enjoy the work they've chosen to do. We enjoy the benefits of upward mobility here in the US; the land of opportunity.

Why would I fully retire at 65 (especially if lifespan extension really is in reach)?

> Society will eventually start to resemble an open source project where a few core contributors do the real work (and get to decide the direction), some others help around, and the majority of people just benefit without having to do anything.

Open-source governance https://en.wikipedia.org/wiki/Open-source_governance

Free-rider problem https://en.wikipedia.org/wiki/Free-rider_problem

As we continue to reward work, the people who are investing in the means of production (energy, labor, automation, raw materials) and science (research and development; education) continue to amass wealth and influence.

This concentration of wealth -- wealth inequality -- has historically presaged and portended unrest.

How contributions to open source projects are reinforced, what motivates the people who choose to contribute (altruism, enlightened self-interest, compassion, acceptance), and what makes a competitive and thus sustainable open source project all make for an interesting study.

... Business models for open-source software: https://en.wikipedia.org/wiki/Business_models_for_open-sourc...

... Political Science: https://en.wikipedia.org/wiki/Political_science

... National currencies are valued in FOREX markets: https://en.wikipedia.org/wiki/Foreign_exchange_market

> I'm wondering if any books have been written to explore this concept further?

"The Singularity is Near: When Humans Transcend Biology" (2005) contains a number of extrapolated predictions; chief among these is that there will continue to be exponential growth in technological change https://en.wikipedia.org/wiki/The_Singularity_Is_Near

... Until we reach limits; e.g. the carrying capacity of our ecosystem, the edge of the universe.

"The Limits to Growth" (1972, 2004) https://en.wikipedia.org/wiki/The_Limits_to_Growth

"Leverage Points: Places to Intervene in a System" (2010) https://news.ycombinator.com/item?id=17781927

Who owns what and who 'gets to' just chill while the solar robots brush their teeth? Heady questions. "Tired yet?"

The Aragon Project has a really interesting take on open source governance:

""" IMAGINE A NATION WITHOUT LAND AND BORDERS

A digital jurisdiction

> Aragon Network will be the first community governed decentralized organization whose goal is to act as a digital jurisdiction, an online decentralized court system that isn’t bound by traditional artificial barriers such as national jurisdictions or the borders of a single country.

Aragon organizations can be upgraded seamlessly using our aragonOS architecture. They can solve disputes between two parties by using the decentralized court system, a digital jurisdiction that operates only online and utilizes your peers to resolve issues.

The Aragon Network Token, ANT, puts the power into the hands of the people participating in the operation of the Network. Every single aspect of the Network will be governed by those willing to make an effort for a better future. """

https://wiki.aragon.org


Today, Europe Lost The Internet. Now, We Fight Back

Here's a quote from this excellent article:

> An error rate of even one percent will still mean tens of millions of acts of arbitrary censorship, every day.

And a redundant -- positively defiant -- link and page title:

"Today, Europe Lost The Internet. Now, We Fight Back." https://www.eff.org/deeplinks/2018/09/today-europe-lost-inte...

Firms with 50 or fewer employees should stay that small, really.

VPN providers in North and South America FTW.


Technically, the phrase "Useful Arts and Sciences" in the Copyright Clause of the US Constitution applies to just that; the definitions of which have coincidentally changed over the years.

The harms to Freedom of Speech -- even an impossibly accurate 99% content filter still results in far too much censorship -- so significantly outweigh the benefits for a limited number of special interests (intending to thwart supposedly inferior American information services, which also currently host "art" and content pertaining to the "useful arts") that it's hard to believe this new policy will have its intended effects.

Haven't there been multiple studies showing that free marketing from e.g. content piracy -- people who experience and recommend said goods at $0 -- is actually a net positive for the large corporate entertainment industry? That, unimpeded, content spreads like the common cold through word of mouth, resulting in a greater number of artful impressions?

How can they not anticipate de-listing of EU content from news and academic article aggregators as an outcome of these new policies? (Resulting in even greater outsized impact on one possible front page that consumers can choose to consume)

For countries in the EU with less than 300 million voters, if you want:

- time for your headline: $

- time for your snippet: $$

- time for your og:description: $$

- free video hosting: $$$

- video revenue: $$$$

- < 30% American content: $$$$$

Pay your bill.

And what of academic article aggregators? Can they still index schema:ScholarlyArticle titles and provide a value-added information service for science?


Consumer science (a.k.a. home economics) as a college major

> That's why we need to bring back the old home economics class. Call it "Skills for Life" and make it mandatory in high schools. Teach basic economics along with budgeting, comparison shopping, basic cooking skills and time management.

Some Jupyter notebooks for these topics that work with https://mybinder.org could be super helpful. A self-paced edX course could also be a great intro to teaching oneself though online learning.

* Personal Finance (budgets, interest, growth, inflation, retirement)

* Food Science (nutrition, meal planning for n people, food prep safety, how long certain things can safely be left out on the counter)

* Productivity Skills (GTD, context switching overhead, calendar, email labels, memo app / shared task lists)
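As a sketch of what one such notebook cell could cover for the Personal Finance topic above (compound interest with regular contributions; the function name and numbers are illustrative, not from any curriculum):

```python
def future_value(principal, annual_rate, years, monthly_contribution=0.0):
    """Balance after compounding monthly and adding a fixed monthly contribution."""
    balance = principal
    monthly_rate = annual_rate / 12
    for _ in range(years * 12):
        balance = balance * (1 + monthly_rate) + monthly_contribution
    return balance

# $1,000 at 5% for 10 years, no contributions: roughly $1,647.
print(round(future_value(1000, 0.05, 10), 2))
# Same, but adding $100/month: roughly $17,175 -- contributions dominate.
print(round(future_value(1000, 0.05, 10, 100), 2))
```

A cell like this makes the budgeting and retirement discussion interactive: students can change the rate or contribution and watch the outcome move.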

There were FACS (Family and Consumer Studies/Sciences) courses in our middle and high school curricula: nutrition, cooking, sewing, family planning, carrying a digital baby around for a while.

Home economics https://en.wikipedia.org/wiki/Home_economics

* Family planning

https://en.wikipedia.org/wiki/Family_planning

> * Personal Finance (budgets, interest, growth, inflation, retirement)

Personal Finance https://en.wikipedia.org/wiki/Personal_finance

Khan Academy > College, careers, and more > Personal finance https://www.khanacademy.org/college-careers-more/personal-fi...

"CS 007: Personal Finance For Engineers" https://cs007.blog

https://reddit.com/r/personalfinance/wiki

> * Food Science (nutrition, meal planning for n people, food prep safety, how long certain things can safely be left out on the counter)

Food Science https://en.wikipedia.org/wiki/Food_science

Dietary management https://en.wikipedia.org/wiki/Dietary_management

Nutrition Education: https://en.wikipedia.org/wiki/Nutrition_Education

MyPlate https://en.wikipedia.org/wiki/MyPlate

Healthy Eating Plate https://www.hsph.harvard.edu/nutritionsource/healthy-eating-...

How to make salads, smoothies, sandwiches

How to compost and avoid unnecessary packaging

* School, College, Testing, "How Children Learn"

GED, SAT, ACT, MCAT, LSAT, GRE, GMAT, ASVAB

Defending a Thesis, Bar Exam, Boards

Khan Academy > College, careers, and more https://www.khanacademy.org/college-careers-more

Educational Testing https://wrdrd.github.io/docs/consulting/educational-testing

529 Plans (can be used for qualifying educational expenses for any person) https://en.wikipedia.org/wiki/529_plan

Middle school "Glimpse" project (Past, Present, Future): plan your 4-year high school course plan, pick 3 careers, pick 3 colleges (and how much they cost)

High school literature: write a narrative essay for college admissions

* Health and Medicine

How to add emergency contact and health information to your phone, carseat (ICE: In Case of Emergency)

How to get health insurance ( https://healthcare.gov/ )

"What's your blood type?" (?!)

Khan Academy > Science > Health and Medicine https://www.khanacademy.org/science/health-and-medicine


Facebook vows to run on 100 percent renewable energy by 2020

Is there a list of 100% renewable energy companies?

OTOH, Apple and Google are 100% renewable -- accounting for Power Purchase Agreements -- today.

{Company, Usage, PPA offsets, Target Year}

Are there sustainability reporting standards which require these facts?


Miami Will Be Underwater Soon. Its Drinking Water Could Go First

Now, now, let's focus on the positives here:

- more pollution from shipping routes through the Arctic circle (and yucky-looking icebergs that tourists don't like)

- less beachfront property

- more desalinatable water

- hotter heat

- more repulsive, detestable significant others (displaced global unrest)

- costs of responding to natural disasters occurring with greater frequency due to elevated ocean temperatures

- fewer parking spaces (!)

What are the other costs and benefits here?

I've received a number of downvotes for this comment. I think it's misunderstood, and that's my fault: I should have included [sarcasm] around the whole comment [/sarcasm].

I've written about our need to address climate change here in past comments. I think the administration's climate change denials (see: "climate change PolitiFact") and regulatory rollbacks are beyond despicable: they're sabotaging the United States by allowing more toxic chemicals into the environment that we all share, and creating more sites that must be cleaned up with tax dollars that aren't there, because these industries pay far less than benchmarks in terms of effective tax rate. We know that vehicle emissions, mercury, and coal ash are toxic: why would we allow people to violate the rights of others in that way?

A person could voluntarily consume said toxic byproducts and not have violated their own rights or the rights of others, you understand. There's no medical value and low potential for abuse, so we just sit idly by while they're violating the rights of other people by dumping toxic chemicals into the environment that are both poisonous and strongly linked to climate change.

What would help us care about this? A sarcastic list of additional reasons that we should care? No! Miami underwater during tourist season is enough! I've had enough!

So, my mistake here - my downvote-earning mistake - was dropping my generally helpful, hopeful tone for cynicism and sarcasm that wasn't motivating enough.

We need people to regulate pollution in order to prevent further costs of climate change. Water in the streets holds up commerce and travel, hampers national security, and destroys roads.

We must stop rewarding pollution if we want it - and definitely resultant climate change - to stop. What motivates other people to care?

Scientists Warn the UN of Capitalism's Imminent Demise

The actual document title: "Global Sustainable Development Report 2019 drafted by the Group of independent scientists: Invited background document on economic transformation, to chapter: Transformation: The Economy" (2018) https://bios.fi/bios-governance_of_economic_transition.pdf [PDF]

Why I distrust command economies (beyond just because of our experiences with violent fascism and defense overspending and the subsequent failures of various communist regimes):

We have elections today. We don't choose to elect people that regard the environment (our air, water, land, and other natural resources) as our most important focus. A command economy driven by these folks for longer than a term limit would be even more disastrous.

The market does not solve for 'externalities': things that aren't costed in. We must have regulation to counteract the blind optimization for profit (and efficiency) which capitalism rewards most.

Environmental regulation is currently insufficient; worldwide. That is the consensus from the Paris Agreement which 195 countries signed in 2015. https://en.wikipedia.org/wiki/Paris_Agreement

Maybe incentives?

We could sell tokens for the amount of pollutants we're allowed to f### everyone else over with, and penalize exceeding the amount we've purchased. That would incentivize firms to pollute less so that they can save money by buying fewer tokens. (Europe does this already, and it's still not going to save the planet from industrial production externalities.)

So, while I'm wary of any suggestion that a command economy would somehow bring forth talent in governance, I look to this article for actionable suggestions that penalize and/or incentivize sustainable business and living practices.

Sustainable reporting really is a must: how can I design an investment portfolio that excludes reckless, irresponsible, indifferent, and careless investments and highly values sustainability?

No one likes to be driven by harsh penalties; everyone likes to be rewarded (even with carrots as incentives).

Markets do not solve for long term outcomes. Case in point: the market has not chosen the most energy efficient cryptocurrencies. Is this an information asymmetry issue: do people just not know, or just not care because the incentives are so alluring, the brand is so strong, or the perceived security assurances of the network outweigh the energy use (and environmental impact) in comparison to dry cleaning and fossil fuel transport?

How would a command economy respond to this? It really is denial and delusion to think that the market will cast aside less energy efficient solutions in order to save the environment all on its own.

So, what do we do?

Do we incentivize getting inefficient vehicles off of the road and into a recycling plant where they belong?

Do we shut down major sources of pollution (coal plants, vehicle emissions)?

Do we create tokens to account for pollution allowances (for carbon and other toxic f###ing chemicals)?

Do we cut irrational subsidies for industries that don't pay their taxes (even when they make money); so that we're aware of the actual costs of our behavior?

Do we grow hemp to absorb carbon, clean up the soil, replace emissions, and store energy?

Who's in the mood to dom these greedy shortsighted idiots into saving themselves and preventing the violation of our right to health (life)? No, you can't because you're busy violating your own rights and finding drugs/druggies and that's not allowed? Is that a lifetime position?

"Go burn a charcoal grill and your gas vehicle in your closed garage for awhile and come talk to me." That's really what we're dealing with here.

Anyways, this paper raises some good points; although I have my doubts about command economies.

[strikethrough] You can't do that to yourself. [/strikethrough] You can't do that to others (even if you pay for their healthcare afterwards).

Where's Captain Planet when you need 'em, anyways?

[+]
[-]

Firefox Nightly Secure DNS Experimental Results

> The experiment generated over a billion DoH transactions and is now closed. You can continue to manually enable DoH on your copy of Firefox Nightly if you like.

...

> Using HTTPS with a cloud service provider had only a minor performance impact on the majority of non-cached DNS queries as compared to traditional DNS. Most queries were around 6 milliseconds slower, which is an acceptable cost for the benefits of securing the data. However, the slowest DNS transactions performed much better with the new DoH based system than the traditional one – sometimes hundreds of milliseconds better.

[-]

Long-sought decay of Higgs boson observed at CERN

[+]
[+]

> It is full of unexplained hardcoded parameters, indeed, which need an explanation from outside of the SM.

https://en.wikipedia.org/wiki/Magic_number_(programming)#Unn...

> The term magic number or magic constant refers to the anti-pattern of using numbers directly in source code

[+]
[+]
[+]
[+]
[-]

Building a Model for Retirement Savings in Python

re: pulling historical data with pandas-datareader, backtesting, algorithmic trading: https://www.reddit.com/r/Python/comments/7zxptg/pulling_stoc...

re: historical returns

- [The article uses a constant 7% annual return rate]

- "The current average annual return from 1923 (the year of the S&P’s inception) through 2016 is 12.25%." https://www.daveramsey.com/blog/the-12-reality (but that doesn't account for inflation)

- https://www.quantopian.com/posts/56b62019a4a36a79da000059 (300%+ over n years (from a down market))
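To see how sensitive the outcome is to the assumed rate, here's a minimal compounding sketch (illustrative only; not the article's code, and it ignores inflation and fees):

```python
def future_value(annual_contribution, rate, years):
    """Future value of equal beginning-of-year contributions compounding at `rate`."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_contribution) * (1 + rate)
    return total

# 30 years of $10,000/year at the article's 7% vs. the nominal 12.25% figure:
at_7_pct = future_value(10_000, 0.07, 30)
at_12_pct = future_value(10_000, 0.1225, 30)
```

The nominal 12.25% assumption roughly triples the final balance relative to 7%, which is why the choice of return assumption dominates these models.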

Is there a Jupyter notebook with this code (with a requirements.txt for https://mybinder.org (repo2docker))?

[-]

New E.P.A. Rollback of Coal Pollution Regulations Takes a Major Step Forward

Would you move your family downwind from a coal plant? Why or why not?

Coal ash pollutes air, water, rain (acid rain), crops (our food), and soil. Which rights of victims does coal pollution infringe? Who is liable for the health effects?

Canada (2030), France (2021), and the UK (2025) are all working to entirely phase out coal-fired power plants for very good reasons (such as neonatal health).

~"They're just picking on coal": No, we're choosing renewables that are lower cost AND don't make workers and citizens sick.

If you can mine for coal, you can set up solar panels and wind turbines.

If you can run a coal mine, you can buy some cheap land, put up solar panels and wind turbines, and connect it to the grid.

[-]

Um – Create your own man pages so you can remember how to do stuff

[+]

If you write these in .rst, you can generate actual manpages with Sphinx: http://www.sphinx-doc.org/en/master/usage/configuration.html...

sphinx.builders.manpage: http://www.sphinx-doc.org/en/master/_modules/sphinx/builders...

[+]
[-]

Leverage Points: Places to Intervene in a System

[+]
[+]

"The Limits to Growth" (1972) https://en.wikipedia.org/wiki/The_Limits_to_Growth

"Thinking in Systems: a Primer" (2008) https://g.co/kgs/B71ebC

Glossary of systems theory https://en.wikipedia.org/wiki/Glossary_of_systems_theory

Systems Theory https://en.wikipedia.org/wiki/Systems_theory

...

Computational Thinking https://en.wikipedia.org/wiki/Computational_thinking

Which of the #GlobalGoals (UN Sustainable Development Goals) Targets and Indicators are primary leverage points for ensuring - if not growth - prosperity? https://en.wikipedia.org/wiki/Sustainable_Development_Goals

[-]

SQLite Release 3.25.0 adds support for window functions

[+]
[+]

Ibis uses window functions for aggregations if the database supports them. IDK when support for SQLite's new window functions will be added? http://docs.ibis-project.org/sql.html#window-functions

[EDIT]

I created an issue for this here: https://github.com/ibis-project/ibis/issues/1597
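For reference, a windowed aggregation in SQLite >= 3.25 looks like this (a sketch via Python's sqlite3; the table and column names are made up, and this is roughly the SQL Ibis would need to emit):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 10), ("east", 30), ("west", 5), ("west", 15)])

# Window function (requires SQLite >= 3.25): per-row average over each region
rows = con.execute("""
    SELECT region, amount,
           AVG(amount) OVER (PARTITION BY region) AS region_avg
    FROM sales
""").fetchall()
```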

[-]

Update on the Distrust of Symantec TLS Certificates

Is the certifi bundle (2018.8.13) on PyPI also updated? https://pypi.org/project/certifi/

https://github.com/certifi/certifi.io/issues/18

> Are these still in the bundle?

> Should projects like requests which depend on certifi also implement this logic?

[-]

The Transport Layer Security (TLS) Protocol Version 1.3

Is PKI still an optional feature of TLS? Can one still use self-signed x.509 certificates and have key-signing parties?

[-]

Academic Torrents – Making 27TB of research data available

[+]

> This stuff should be basic literacy for everyone.

Arguably, one compromised PKI x.509 CA jeopardizes all SSL/TLS channel security if there's no certificate pinning and an alternate channel for distributing signed cert fingerprints (cryptographically signed hashes).
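In essence, pinning just means comparing a hash of the presented certificate against a fingerprint obtained out of band. A minimal sketch (the certificate bytes here are hypothetical placeholders, not a real DER cert):

```python
import hashlib

def sha256_fingerprint(der_cert: bytes) -> str:
    """Colon-separated SHA-256 fingerprint of a DER-encoded certificate."""
    digest = hashlib.sha256(der_cert).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def is_pinned(der_cert: bytes, pinned: str) -> bool:
    # Compare against a fingerprint distributed over an alternate, trusted channel.
    return sha256_fingerprint(der_cert) == pinned.upper()

# Hypothetical bytes standing in for a real DER certificate:
cert = b"-not-a-real-certificate-"
pin = sha256_fingerprint(cert)
```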

We could teach blockchain and cryptocurrency principles: private/secret key, public key, hash verification; there, there's money on the table.

GPG presumes secure key distribution (`gpg --verify .asc`).

TUF is designed to survive certain role key compromises. https://theupdateframework.github.io

[-]

1/0 = 0

1/0 = 1(±∞)

https://twitter.com/westurner/status/960508624849244160

> How many times does zero go into any number? Infinity. [...]

> How many times does zero go into zero? infinity^2?

[+]

Extrapolate.

What value does 1/x approach?

What about 2/x?

And then, what about ∞/x? What value would we expect that to approach? ∞(±∞)

[+]
[-]

Power Worth Less Than Zero Spreads as Green Energy Floods the Grid

[+]
[+]
[+]
[+]

Rational cryptocurrency mining firms can use the excess (unstorable) energy by converting it back to money (while the sun shines and the wind blows).

Money > Energy > Money

> Someone is having to build a lot of highly wasteful, redundant infrastructure.

We're nowhere near having the energy infrastructure necessary to support everyone having an electric vehicle yet.

Energy storage is key to maximizing returns from renewables and minimizing irreversible environmental damage.

[-]

Kernels, a free hosted Jupyter notebook environment with GPUs

[+]
[+]
[+]

Here are the Kaggle Kernels Dockerfiles:

- Python: https://github.com/Kaggle/docker-python/blob/master/Dockerfi...

- R: https://github.com/Kaggle/docker-rstats/blob/master/Dockerfi...

https://mybinder.org builds containers (and launches free cloud instances) on demand with repo2docker from a (commit hash, branch, or tag) repo URL: https://repo2docker.readthedocs.io/en/latest/config_files.ht...

[+]
[-]

Solar and wind are coming. And the power sector isn’t ready

I don't know that fatalism and hopelessness are motivating for decision makers (who are seeking greater margins regardless of policy and lobbies).

Is our transformation to 100% clean energy ASAP a certain eventuality? On a long enough timescale, it would be irrational for utilities to not choose both lower cost and more sustainable environmental impact ('price-rational', 'environment-rational').

We should expect storage and generation costs to continue to fall as we realize even just the current pipeline of capitalizable [storage] research.

Solar energy is free.

[-]

Solar Just Hit a Record Low Price in the U.S

[+]

>> Relevant bits:

>> “On their face, they’re less than a third the price of building a new coal or natural gas power plant,” Ramez Naam, an energy expert and lecturer at Singularity University, told Earther in an email. “In fact, building these plants is cheaper than just operating an existing coal or natural gas plant.”

>> There’s a 30 percent federal investment tax credit for solar projects that helps drive down the cost of this and other solar projects. But Naam said even if you take away that credit, “these bids, un-subsidized, are still cheaper than any new coal or gas plants, and possibly cheaper than operating existing plants.”

I'm assuming that's without factoring in the health cost externalities.

[+]
[-]

Tim Berners-Lee is working on a platform designed to re-decentralize the web

[+]
[+]

Spec: https://github.com/solid/solid-spec

Source: https://github.com/solid/solid

...

From https://news.ycombinator.com/item?id=16615679 ( https://westurner.github.io/hnlog/#comment-16615679 )

> ActivityPub (and OStatus, and ActivityStreams/Salmon, and OpenSocial) are all great specs and great ideas. Hosting and moderation cost real money (which spammers/scammers are wasting).

> Know what's also great? Learning. For learning, we have the xAPI/TinCan spec and also schema.org/Action.

Mastodon has now supplanted GNU StatusNet.

[-]

More States Opting to 'Robo-Grade' Student Essays by Computer

edX can automate short essay grading with edx/edx-ora2 "Open Response Assessment Suite" [1] and edx/ease "Enhanced AI scoring engine" [2].

1: https://github.com/edx/edx-ora2 2: https://github.com/edx/ease

... I believe there's also a tool for peer feedback.

[+]
[-]

Ask HN: Looking for a simple solution for building an online course

I want to build an online course on graph algorithms for my university. I've tried to find a solution which would let students submit, execute, and test their code (i.e., an online judge), but have had no success. There are a lot of complex LMSes and none of them seem to have this feature as basic functionality.

Are there any good out-of-box solutions? I'm sure I can build a course using Moodle or another popular LMS with some plugin, but I don't want to spend my time customizing things.

I'm interested both in platforms and self-hosted solutions. Thanks!

[+]

nbgrader is a "A system for assigning and grading Jupyter notebooks." https://github.com/jupyter/nbgrader

jupyter-edx-grader-xblock https://github.com/ibleducation/jupyter-edx-grader-xblock

> Auto-grade a student assignment created as a Jupyter notebook, using the nbgrader Jupyter extension, and write the score in the Open edX gradebook

... networkx is a graph library written in Python which has pretty good docs: https://networkx.github.io/documentation/stable/reference/

There are a few books which feature networkx.
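For the online-judge part, the grader only needs to call a student function against hidden test cases. A pure-Python sketch of the kind of exercise and assertions involved (networkx's shortest-path functions cover the same ground; the names here are illustrative):

```python
from collections import deque

def shortest_path_length(adj, start, goal):
    """BFS shortest path length (in edges) in an unweighted graph.

    adj: dict mapping node -> iterable of neighbor nodes.
    Returns -1 if goal is unreachable from start.
    """
    if start == goal:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr == goal:
                return dist + 1
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return -1

# A grader (e.g. an nbgrader hidden test cell) would run assertions against this:
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
```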

[-]

New research a ‘breakthrough for large-scale discrete optimization’

[+]

"An Exponential Speedup in Parallel Running Time for Submodular Maximization without Loss in Approximation" https://www.arxiv-vanity.com/papers/1804.06355/

The ACM STOC 2018 conference links to "The Adaptive Complexity of Maximizing a Submodular Function" http://dl.acm.org/authorize?N651970 https://scholar.harvard.edu/files/ericbalkanski/files/the-ad...

A DOI URI would be great, thanks.

[-]

Wind, solar farms produce 10% of US power in the first four months of 2018

[+]

> This is counting all output by wind and solar regardless if it is needed and usable when the power is being produced. This is quite important because wind and solar are not on-demand sources of power.

I think you have that backwards: in the US, we lack the ability to scale down coal and nuclear plants. Solar and Wind are generally the first to get pulled offline when generated capacity exceeds demand and storage.

TIL this is called "curtailment" and it's an argument that utilities have used to justify not spending on renewables that are saving the environment from global warming (which is going to require more electricity for air conditioning).

Solar energy production peaks around noon. Demand for electricity peaks in the evening. We need storage (batteries with supercapacitors out front) in order to store the difference between peak generation and peak use. Because they're unable to store this extra energy, they temporarily shut down solar and wind and leave the polluting plants online.

Consumers aren't exposed to daily price fluctuations: they get a flat rate that makes it easy to check their bill; so there's no price incentive to e.g. charge an EV at midday when energy is cheapest.

The 'Duck curve' shows this relation between peak supply and demand in electricity markets: https://en.wikipedia.org/wiki/Duck_curve
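A toy illustration of the mismatch, with hypothetical hourly profiles (the numbers are invented, not from any real grid):

```python
# Hypothetical hourly profiles (MWh) for one day: solar peaks at noon,
# demand peaks in the evening -- the "duck curve" mismatch.
solar = [0]*6 + [2, 5, 8, 11, 13, 14, 14, 13, 11, 8, 5, 2] + [0]*6
demand = [6]*6 + [7, 8, 8, 8, 8, 9, 9, 9, 10, 12, 14, 15, 14, 12, 10, 8, 7, 6]

surplus = sum(max(s - d, 0) for s, d in zip(solar, demand))
deficit = sum(max(d - s, 0) for s, d in zip(solar, demand))
# `surplus` is energy that gets curtailed today; with storage it could
# instead cover part of the evening `deficit`.
```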

Developing energy storage capabilities (through infrastructure and open access basic research that can be capitalized by all) is likely the best solution. According to a fairly recent report, we could go 100% renewable with the energy storage tech that exists today.

But there's no money for it. There's money for subsidizing oil production (regardless of harms (!)), but not so much for wind and solar. There's money for responding to natural disasters caused by global warming, but not so much for non-carbon-based energy sources that don't cause global warming. A film called "The Burden: Fossil Fuel, the Military, and National Security" quotes the actual unsubsidized price of a gallon of gasoline.

Wouldn't it be great if there was some kind of computer workload that could be run whenever energy is cheapest ( 'energy spot instances') so that we can accelerate our migration to renewable energy sources that are saving the environment for future generations? If there were people who had strong incentives to create demand for power-efficient chips and inexpensive clean energy.

Where would we be if we had continued with Jimmy Carter's solar panels on the roof of the White House (instead of constant war and meddling with competing oil production regions of the world)?

It's good to see wind and solar growing this fast this year. A chart with cost per kWh or MWh would be enlightening.

[-]

FDA approves first marijuana-derived drug and it may spark DEA rescheduling

[+]
[+]
[+]
[+]
[+]
[+]

Again, I ask you to explain how the current law grants equal rights.

https://news.ycombinator.com/item?id=17401906

> We tend to have issues with Equal rights/protections: slavery, voting rights, [school] segregation. Please help us understand how to do this Equally:

>> Furthermore, (1) write a function to determine whether a given Person has a (natural inalienable) right: what information may you require? (2) write a function to determine whether any two Persons have equal rights.

Abolitionists faced similar criticism from on high.

[-]

States Can Require Internet Tax Collection, Supreme Court Rules

[+]
[+]
[+]
[+]
[+]
[+]
[+]

This would reduce costs of tax collection for all parties.

What is the most convenient format for this layered geographic data? Are the tax district boundary polygons already otherwise available as open data? What do localities call these? Sales tax tables, sales tax database, machine-readable flat files in an open format with a common schema?

How much tax revenue should it cost to provide such a service on a national level?

States, Counties, Cities, 'Tax Zones'(?) could be required to host tax.state.us.gov or similar with something like Project Open Data JSONLD /data.json that could be aggregated and shared by a server with a URL registry, a task queue service, and a CDN service.

While the Bitcoin tax payments bill passed the Senate and House in Arizona, it was vetoed in May 2018. Seminole County in Florida now allows tax payment with cryptocurrencies such as Bitcoin:

https://cointelegraph.com/news/us-seminole-county-florida-to...

> According to a press release, the county will begin accepting Bitcoin (BTC) and Bitcoin Cash (BCH) to pay for services, including property taxes, driver license and ID card fees, as well as tags and titles. The Seminole County Tax Collector will reportedly employ blockchain payments company BitPay, which will allow the county to receive settlement the next business day directly to its bank account in US dollars.

This could also help reduce the costs of tax collection and possibly increase the likelihood of compliance with the forthcoming tax bills!

[+]
[-]

Ask HN: Do you consider yourself to be a good programmer?

if not, why? how do you validate your achievements?

> For identifying strengths and weaknesses: "Programmer Competency Matrix":

> - http://sijinjoseph.com/programmer-competency-matrix/

> - https://competency-checklist.appspot.com/

> - https://github.com/hltbra/programmer-competency-checklist

[+]

Automated testing is not a choice in many industries.

If you're not familiar with TDD, you haven't yet achieved that level of mastery.

There's a productivity boost to being able to change quickly without breaking things.

Is all unit/functional/integration testing and continuous integrating TDD? Is it still TDD if you write the tests after you write the function (and before you commit/merge)?

I think this competency matrix is a helpful resource. And I think that learning TDD is an important thing for a good programmer.

[+]

This is all unfounded conjecture: it seems easier to remember which parameter combinations may exist and need to be tested when writing the function; so "let's all write tests later" becomes a black box exercise which is indeed a helpful perspective for review, but isn't the most effective use of resources.

[+]

A good programmer finds common attributes and behaviors and organizes them into namespaced structs/arrays/objects with functions/methods and tests. Abstractly, which terms should we use to describe hierarchical clusters of things with information and behaviors if not those from a known software development or project management methodology?

And a good programmer asks why people might have spent so much time formalizing project development methodologies. "What sorts of product (team) failures are we dealing with here?" is an expensive question to answer as a team.

By applying tenets of Named agile software development methodologies, teams and managers can feel like they're discussing past and current experiences/successes/failures with comparable implementations of approaches that were or are appropriate for different contexts.

To argue the other side, just cherry picking from different methodologies is creating a new methodology, which requires time to justify basically what we already have terms for on the wall over here.

"We just pop tasks off the queue however" is really convenient for devs but can be kept cohesive by defining sensible queues: [kanban] board columns can indicate task/issue/card states and primacy, and [sprint] milestone planning meetings can yield complexity 'points' estimates for completable tasks and their subtasks. With team velocity (points/time), a manager can try to appropriately schedule optimal paths of tasks (that meet the SMART criteria: specific, measurable, achievable, relevant, and time-bound); instead of fretting with the team over adjusting dates on a Gantt chart (task dependency graph) deadline, the team can focus on completing the next most valuable tasks in the queue.

What about your testing approach makes it 'NOT TDD'?

How long should the pre-release static analysis and dynamic analyses take in my fancy DevOps CI TDD with optional CD? Can we release or deploy right now? Why or why not?

'We can't release today because we spent too much time arguing about quotes like "A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines." ("Self Reliance" 1841. Emerson) and we didn't spec out the roof trusses ahead of time because we're continually developing a new meeting format, so we didn't get to that, or testing the new thing, yet.'

A good programmer can answer the three questions in a regular meeting at any time, really:

> 1. What have you completed since the last meeting?

> 2. What do you plan to complete by the next meeting?

> 3. What is getting in your way?

And:

Can we justify refactoring right now for greater efficiency or additional functionality?

[+]

IMHO, it's so much easier to write good, comprehensive tests while writing the function (FUT: function under test) because that information is already in working memory.

It's also easier to adversarially write tests with a fresh perspective.

I shouldn't need to fuzz every parameter for every commit. Certainly for releases.

"Building an AppSec Pipeline: Keeping your program, and your life, sane" https://www.owasp.org/index.php/OWASP_AppSec_Pipeline

[+]
[+]

> TDD can help keep a developer focused - and this can help overall productivity rates - but it doesn't directly help lower defect rates.

We would need to reference some data with statistical power; though randomization and control are infeasible: no two teams are the same, no two projects are the same, no two objective evaluations of different apps' teams' defect rates are an apples to apples comparison.

Maybe it's the coverage expectation: do not add code that is not run by at least one test.

[-]

Handles are the better pointers

[+]

> The final optimization would have been to write a language that would define game entities in terms of the game components they were subject to and automatically generate the single class that was union of all possible types and would be a "row" in the table

django-typed-models https://github.com/craigds/django-typed-models

> polymorphic django models using automatic type-field downcasting

> The actual type of each object is stored in the database, and when the object is retrieved it is automatically cast to the correct model class

...

> the common thread was that a hierarchical OO structure ended up adding a lot of unneeded complexity for games that hindered flexibility as requirements changed or different behaviors for in-game entities were added.

So, in order to draw a bounding box for an ensemble of hierarchically/tree/graph-linked objects (possibly modified in supersteps for reproducibility), is an array-based adjacency matrix still fastest?

Are sparse arrays any faster for this data architecture?

[+]

ContentType.model_class(), models.Model Meta.abstract = True, django-reversion, django-guardian

IDK how to do partial indexes with the Django ORM. A simple filter(bool, rows) could probably significantly shrink the indexes for such a wide table.

Arrays are fast if the features/dimensions are known at compile time (if the TBox/schema is static). There's probably an intersection between object reference overhead and array copy costs.

Arrow (with e.g. parquet on disk) can help minimize data serialization/deserialization costs and maximize copy-free data interoperability (with columnar arrays that may have different performance characteristics for whole-scene transformation operations than regular arrays).

Many implementations of SQL ALTER TABLE don't have to create a full copy in order to add a column, but do require a permission that probably shouldn't be GRANTed to the application user and so online schema changes are scheduled downtime operations.

If you're not discovering new features at runtime and your access pattern is generally linear, arrays probably are the fastest data structure.
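The struct-of-arrays layout the article favors can be sketched in Python; a whole-scene operation like a bounding box is then one linear pass over parallel arrays (illustrative only, not the article's code):

```python
# Struct-of-arrays layout: one parallel array per component field.
# An entity "handle" is just an index into these arrays.
xs = [1.0, 4.0, -2.0, 3.5]
ys = [0.5, 2.0, 7.0, -1.0]
alive = [True, True, False, True]  # slot 2 is a freed handle

def bounding_box(xs, ys, alive):
    """Axis-aligned bounding box over live entities: (min_x, min_y, max_x, max_y)."""
    live = [(x, y) for x, y, a in zip(xs, ys, alive) if a]
    return (min(x for x, _ in live), min(y for _, y in live),
            max(x for x, _ in live), max(y for _, y in live))
```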

Hacker News also has a type attribute that you might say is used polymorphically: https://github.com/HackerNews/API/blob/master/README.md#item...

Types in RDF are additive: a thing may have zero or more rdf:type property instances. RDF quads can be stored in one SQL table like:

_id,g,s,p,o,xsd:datatype,xml:lang

... with a few compound indexes that are combinations of (s,p,o) so that triple pattern graph queries like (?s,?p,1) are fast. Partial indexes (SQLite, PostgreSQL) would be faster than full-table indexes for RDF in SQL, too.
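A minimal sqlite3 sketch of that quad table, with one compound index so a (?s, :p, 1)-style pattern is an index lookup (the schema and data are illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE quads (
    _id INTEGER PRIMARY KEY, g TEXT, s TEXT, p TEXT, o TEXT,
    datatype TEXT, lang TEXT)""")
# Compound index covering (p, o) lookups like the (?s, ?p, 1) pattern:
con.execute("CREATE INDEX idx_pos ON quads (p, o, s)")
con.executemany(
    "INSERT INTO quads (g, s, p, o, datatype, lang) VALUES (?,?,?,?,?,?)",
    [("g1", ":a", "rdf:type", ":Thing", None, None),
     ("g1", ":a", ":count", "1", "xsd:integer", None),
     ("g1", ":b", ":count", "1", "xsd:integer", None)])

# Triple pattern (?s, :count, 1):
subjects = [s for (s,) in con.execute(
    "SELECT s FROM quads WHERE p = ? AND o = ?", (":count", "1"))]
```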

[-]

Neural scene representation and rendering

[+]
[+]

"Spatial memory" https://en.wikipedia.org/wiki/Spatial_memory

It may be splitting hairs, but I think the mammalian brain, at least, can simulate/remember/imagine additional 'dimensions' like X/Y/Z spin, derivatives of velocity like acceleration/jerk/jounce.

Is space 11 dimensional (M string theory) or 2 dimensional (holographic principle)? What 'dimensions' does the human brain process? Is this capacity innate or learned; should we expect pilots and astronauts to have learned to more intuitively cognitively simulate gravity with their minds?

[-]

Ask HN: Is there a taxonomy of machine learning types?

Besides classification and regression, and the unsupervised methods for principal components, clustering, and frequent item-sets, what tools are there in the ML toolkit and what kinds of problems are amenable to their use?

[-]

Senator requests better https compliance at US Department of Defense [pdf]

The "Mozilla SSL Configuration Generator" has a checkbox for 'HSTS enabled?' and can generate SSL/TLS configs for Apache, Nginx, Lighttpd, HAProxy, AWS, ELB. https://mozilla.github.io/server-side-tls/ssl-config-generat...

You can select 'nginx', then 'modern', and then 'apache' for a modern Apache configuration.

Are the 'modern' configs FIPS compliant?

What browsers/tools does requiring TLS 1.3 break?

[-]

Banks Adopt Military-Style Tactics to Fight Cybercrime

> In a windowless bunker here, a wall of monitors tracked incoming attacks — 267,322 in the last 24 hours, according to one hovering dial, or about three every second — as a dozen analysts stared at screens filled with snippets of computer code.

> Cybercrime is one of the world’s fastest-growing and most lucrative industries. At least $445 billion was lost last year, up around 30 percent from just three years earlier, a global economic study found, and the Treasury Department recently designated cyberattacks as one of the greatest risks to the American financial sector.

Is this type of monitoring possible (necessary, even) with blockchains? Blockchains generally silently disregard bad/invalid transactions. Where could discarded/disregarded transactions and forks be reported to in a decentralized blockchain system? Who would pay for log storage? How redundantly replicated should which data be?

How DDOS resistant are centralized and decentralized blockchains?

Exchanges have risk. In terms of credit fraud: some crypto asset exchanges do allow margin trading, many credit card companies either refuse transactions with known exchanges or charge cash advance interest rates, and all transactions are final.

Exchanges hold private keys for customers' accounts, move a lot to offline cold storage, and maybe don't do a great job of explaining that YOU SHOULD NOT LEAVE MONEY ON AN EXCHANGE. One should transfer funds to a different account; such as a hardware or paper wallet or a custody service.

Do/can crypto asset exchanges participate in these exercises? To what extent do/can blockchains help solve for aspects of our unfortunately growing cybercrime losses?

Premined blockchains could reportedly handle card/chip/PIN transaction volumes today.

[-]

No, Section 230 Does Not Require Platforms to Be “Neutral”

> It’s foolish to suggest that web platforms should lose their Section 230 protections for failing to align their moderation policies to an imaginary standard of political neutrality. Trying to legislate such a “neutrality” requirement for online platforms—besides being unworkable—would be unconstitutional under the First Amendment.

... https://en.wikipedia.org/wiki/Section_230_of_the_Communicati...

Ask HN: Do battery costs justify “buy all sell all” over “net metering”?

Are batteries the primary justification for "buy all sell all" over "net metering"?

Are next-gen supercapacitors the solution?

> Ask HN: Do battery costs justify "buy all sell all" over "net metering"?

> Are batteries the primary justification for "buy all sell all" over "net metering"?

> Are next-gen supercapacitors the solution?

With "Net Metering", electric utilities buy consumers' excess generated energy at retail or wholesale rates. https://en.wikipedia.org/wiki/Net_metering

With "Buy All, Sell All", electric utilities require consumers to sell all of the energy they generate from e.g. solar panels (usually at wholesale prices, AFAIU) and buy all of the energy they consume at retail rates. They can't place the meter after any local batteries.

Do I have this right?

Net metering:

(used-generated) x (retail || wholesale)

Buy all, sell all:

(used x retail) - (generated x wholesale)
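As functions (a sketch; real tariffs add fixed charges, tiers, and time-of-use rates):

```python
def net_metering_bill(used_kwh, generated_kwh, rate):
    """Net metering: pay (or be credited) for net consumption at one rate."""
    return (used_kwh - generated_kwh) * rate

def buy_all_sell_all_bill(used_kwh, generated_kwh, retail, wholesale):
    """Buy all, sell all: buy everything at retail, sell everything at wholesale."""
    return used_kwh * retail - generated_kwh * wholesale

# Hypothetical month: 900 kWh used, 600 kWh generated, $0.12 retail, $0.04 wholesale.
nm = net_metering_bill(900, 600, 0.12)
bs = buy_all_sell_all_bill(900, 600, 0.12, 0.04)
# Net metering yields the smaller bill for the generating consumer.
```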

For the energy generating consumer, net metering is a better deal: they have power when the grid is down, and they keep or earn more for the energy generation capability they choose to invest in.

Break-even on solar panels happens sooner with net metering.

Utilities argue that maintaining grid storage and transfer costs money, which justifies paying energy generating consumers less than they pay for more constant sources of energy like dams, wind farms, and commercial solar plants.

Building a two-way power transfer grid costs money. Batteries require replacement after a limited number of cycles. Spiky or bursting power generation is not good for batteries because they don't get a full cycle. [Hemp] supercapacitors can smooth out that load and handle many more partial charge and discharge cycles.

Is energy storage the primary justifying cost driver for "buy all, sell all"?

What investments are needed in order to more strongly incentivize clean energy generation? Do we need low cost supercapacitors to handle the spiky load?

Are these utilities granted a monopoly? Are they price fixing?

Energy demand from blockchain mining has not managed to keep demand constant so that utilities have profit to invest in clean energy generation and a two-way smart grid that accommodates spiky consumer energy generation. Demand for electricity is falling as we become less wasteful and more energy efficient. As the cost of renewable energy continues to fall (and become less expensive than nonrenewables), there should be more margin for energy utilities which cost-rationally and environmentally-rationally choose to buy renewable energy and sell it to consumers.

Please correct me with the appropriate terminology.

How can we more strongly incentivize consumer solar panel investments?

Here's a discussion about the lower costs of hemp supercapacitors as compared with graphene super capacitors: https://news.ycombinator.com/item?id=16800693

""" Hemp supercapacitors might be a good solution to the energy grid storage problem. Hemp absorbs carbon, doesn't leave unplowable roots in the fields, returns up to 70% of nutrients to the soil, and grows quickly just about anywhere. Hemp bast fiber is normally waste. Hemp anodes for supercapacitors are made from the bast fiber that is normally waste.

Graphene is very useful; but industrial production of graphene is dangerous because of risks to the lungs and the blood-brain barrier.

Hemp is an alternative to graphene for modern supercapacitors (which now have much greater [energy density] in Wh/kg)

"Hemp Carbon Makes Supercapacitors Superfast” https://www.asme.org/engineering-topics/articles/energy/hemp...

> “Our device’s electrochemical performance is on par with or better than graphene-based devices,” Mitlin says. “The key advantage is that our electrodes are made from biowaste using a simple process, and therefore, are much cheaper than graphene.”

> Graphene is, however, expensive to manufacture, costing as much as $2,000 per gram. [...] developed a process for converting fibrous hemp waste into a unique graphene-like nanomaterial that outperforms graphene. What’s more, it can be manufactured for less than $500 per ton.

> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg.

https://scholar.google.com/scholar?hl=en&q=hemp+supercapacit...

https://en.wikipedia.org/wiki/Supercapacitor

I feel like a broken record mentioning this again and again. """

[-]

Portugal electricity generation temporarily reaches 100% renewable

mgdo | 2018-04-09 21:17:43 | 234 | # | ^
[+]
[+]

Hemp supercapacitors might be a good solution to the energy grid storage problem. Hemp absorbs carbon, doesn't leave unplowable roots in the fields, returns up to 70% of nutrients to the soil, and grows quickly just about anywhere.

Hemp bast fiber is normally waste; hemp anodes for supercapacitors are made from this waste fiber.

Graphene is very useful; but industrial production of graphene is dangerous because of risks to the lungs and the blood-brain barrier.

Hemp is an alternative to graphene for modern supercapacitors (which now have much greater power density in wH/kg)

"Hemp Carbon Makes Supercapacitors Superfast” https://www.asme.org/engineering-topics/articles/energy/hemp...

> “Our device’s electrochemical performance is on par with or better than graphene-based devices,” Mitlin says. “The key advantage is that our electrodes are made from biowaste using a simple process, and therefore, are much cheaper than graphene.”

> Graphene is, however, expensive to manufacture, costing as much as $2,000 per gram. [...] developed a process for converting fibrous hemp waste into a unique graphene-like nanomaterial that outperforms graphene. What’s more, it can be manufactured for less than $500 per ton.

> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg.

https://scholar.google.com/scholar?hl=en&q=hemp+supercapacit...

https://en.wikipedia.org/wiki/Supercapacitor

I feel like a broken record mentioning this again and again.

[+]

> please correct your usage of power/energy density. Power density is measured in W/kg, energy density is measured in Wh/kg. Supercapacitors tend to excel in the former but be poor in the latter.

Good call; I'd update the units. Traditional supercapacitors have had lower energy density but faster charging/discharging. Graphene and hemp somewhat change the game, AFAIU.

It makes sense to put supercapacitors in front of the battery banks because they last so many cycles and because they charge and discharge so quickly (a very helpful capability for handling spiky wind and solar loads).
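This front-the-battery idea can be sketched as a toy model. Everything here is an illustrative assumption, not something from the thread: a trailing moving average stands in for the controller, the smooth component goes to the battery, and the fast residual goes to the supercapacitor.

```python
# Illustrative frequency-splitting model: route the smooth component of a
# spiky generation trace to a battery and the fast residual to a
# supercapacitor. All numbers are made up for illustration.

def moving_average(xs, window):
    """Trailing moving average with a growing window at the start."""
    out = []
    for i in range(len(xs)):
        lo = max(0, i - window + 1)
        out.append(sum(xs[lo:i + 1]) / (i + 1 - lo))
    return out

def split_load(generation_kw, window=3):
    """Return (battery_kw, supercap_kw) per timestep; the two always
    sum back to the original generation trace."""
    smooth = moving_average(generation_kw, window)
    supercap = [g - s for g, s in zip(generation_kw, smooth)]
    return smooth, supercap

# A spiky solar trace (kW): clouds passing over a panel.
trace = [5.0, 0.5, 4.8, 5.2, 0.2, 5.1]
battery, supercap = split_load(trace)
print(battery)   # smoothed series seen by the battery
print(supercap)  # fast residual absorbed by the supercapacitor
```

A real controller would also respect the supercapacitor's state of charge and power limits; this only shows the splitting idea.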

[+]

I had assumed that the rate of charge and discharge includes time (hours) in the unit: Wh/kg.

My understanding is that there's usually a curve over time t that represents the charging rate from empty through full.

[edit]

"C rate"

Battery_(electricity)#C_rate https://en.wikipedia.org/wiki/Battery_(electricity)#C_rate

Battery_charger#C-rates https://en.wikipedia.org/wiki/Battery_charger#C-rates

> Charge and discharge rates are often denoted as C or C-rate, which is a measure of the rate at which a battery is charged or discharged relative to its capacity. As such the C-rate is defined as the charge or discharge current divided by the battery's capacity to store an electrical charge. While rarely stated explicitly, the unit of the C-rate is [h^−1], equivalent to stating the battery's capacity to store an electrical charge in unit hour times current in the same unit as the charge or discharge current.
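A worked example of the quoted definition, with illustrative numbers: a 2 Ah cell discharged at 1 A runs at 0.5 C (unit h^-1) and empties in about 2 hours.

```python
def c_rate(current_a, capacity_ah):
    """C-rate = charge/discharge current divided by capacity; unit is 1/h."""
    return current_a / capacity_ah

# A 2 Ah cell discharged at 1 A:
rate = c_rate(current_a=1.0, capacity_ah=2.0)
print(rate)        # 0.5 (i.e. 0.5C)
print(1.0 / rate)  # 2.0 hours to full discharge
```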

[+]

You know, I'm not sure. This article is from a few years ago now and there's not much uptake.

It may be that most people dismiss supercapacitors based on the stats for legacy (pre-graphene/pre-hemp) supercapacitors: bulky and low in energy density, but quick to charge and long-lasting.

It may be that hemp is taxed at up to 90% because it's a controlled substance in the US (but not in Europe, Canada, or China, from which we must import shelled hemp seeds). A historical accident?

[-]

GPU Prices Drop ~25% in March as Supply Normalizes

How do these new GPUs compare to those from 10 years ago in terms of FLOPs per Watt? https://en.wikipedia.org/wiki/Performance_per_watt

The new ASICs for Ethereum mining can't be solely responsible for this large a shift in the market.

(Note that NVIDIA's stock price is up over 1700% over the past 10 years, and that Bitcoin mining on CPUs and GPUs hasn't been profitable for quite a while. In 2007, I don't think we knew that hashing could be done on GPUs; though there were SSL accelerator cards, which were mighty expensive.)

[-]

Apple says it’s now powered by renewable energy worldwide

[+]
[+]
[+]
[+]
[+]

Reaching 100% renewable energy by purchasing and funding renewable energy is an outstanding achievement.

Is there another statistic for measuring how many kWh or MWh are sourced directly from renewable energy sources (or, more precisely, 'directly' from batteries + hemp supercapacitors between generation and use)?

[-]

Hackers Are So Fed Up with Twitter Bots They’re Hunting Them Down Themselves

[+]

There's an open call for papers/proposals for handling the deluge. "Funding will be provided as an unrestricted gift to the proposer's organization(s)" ... "Twitter Health Metrics Proposal Submission" https://blog.twitter.com/official/en_us/topics/company/2018/...

[+]

Are you suggesting that Mastodon has a better system for identifying harassment, spam, and spam accounts? Or that, given that they're mostly friendly early adopters, they haven't yet encountered the problem?

[+]

Mastodon is a federated system like StatusNet/GNU Social.

So, in your opinion, Mastodon nodes - by virtue of being federated - would be better equipped to handle the spam and harassment volume that Twitter is subject to?

I find that hard to believe.

ActivityPub (and OStatus, and ActivityStreams/Salmon, and OpenSocial) are all great specs and great ideas. Hosting and moderation cost real money (which spammers/scammers are wasting).

Know what's also great? Learning. For learning, we have the xAPI/TinCan spec and also schema.org/Action.

[-]

“We’re committing Twitter to increase the health and civility of conversation”

First Amendment protections apply to suits brought by the government. Civil suits are required to prove damages ("quantum of loss").

There are many open platforms. (I've contributed to those as well.) Some are built on open standards. None of these open platforms have the procedures or resources to handle the onslaught of disrespectful trash that the people we've raised eventually use these platforms to direct at other people who have feelings and understand the Golden Rule.

https://en.wikipedia.org/wiki/Golden_Rule

The initial early adopters (who have other better things to do) are fine: helpful, caring, critical, respectful; healthy. And then everyone else comes surging in with hate, disrespect, and vitriol; unhealthy. They don't even realize that being hateful and disrespectful is making them more depressed. They think that complaining and talking smack to people is changing the world. And then they turn off the phone or log out of the computer, and carry on with their lives.

No-one taught them to be the positive, helpful energy they want to attract from the world. No-one properly conditioned them to either respectfully disagree according to the data or sit down and listen. No-one explained to them that a well-founded argument doesn't fit in 140 or 280 characters, but a link and a headline do. No-one explained to them that what they write on the internet lasts forever and will be found by their future interviewers, investors, jurors, and voters. No-one taught them that being respectful and helpful in service of other people - of the group's success, of peaceful coexistence - is the way to get ahead AND be happy. "No-one told me that."

Shareholders of public corporations want to see growth in meaningless numbers, foreign authoritarian governments see free expression as a threat to their ever-so-fragile self-perceptions, political groups seek to frame and smear and malign and discredit (because they are so in need of group acceptance; because money still isn't making them happy), and there are children with too much free time reading all of these.

No-one is holding these people accountable: we need transparency and accountability. We need to focus on more important goals and feel good about helping; about volunteering our time to help others be happier.

Instead, now that these haters and scam artists have all self-identified, we must spend our time conditioning their communications until they learn to respectfully disagree on facts and data or go somewhere else. "That's how you feel? Great. How does that make your victim feel?" is the confrontation that some people are seeking from companies that set out to serve free speech and provide a forum for citizens to share the actual news.

Who's going to pay for that? Can they sue for their costs and losses? Advertisers do not want a spot next to hateful and disrespectful.

"How dare you speak of censorship in such veiled terms!?" Really? They're talking about taking down phrases like "kill" and "should die"; not phrases like "I disagree because:"

So, now, because there are so many hateful economically disadvantaged people in the world with nothing better to do and no idea how to run a business or keep a job with benefits, these companies need to staff 24 hour a day censors to take down the hate and terror and gang recruiting within one hour. What a distorted mirror of our divisively fractured wealth inequality, indeed.

"Ban gangs ASAP, please: they'll just go away"

How much does it cost to pay prison labor to redundantly respond to this trash? Are those the skills they need to choose a different career with benefits and savings that meet or exceed inflation when they get out?

What is the procedure for referring threats of violence to justice in your jurisdiction? Are there wealthy individuals in your community who would love to contribute resources to this effort? Maybe they have some region-specific pointers for helping the have-nots out here trolling like it's going to get them somewhere they want to be in life?

Let me share a little story with you:

A person walks into a bar/restaurant, flicks off the bartender/waiter, orders 5 glasses of free water, starts plastering ads to the walls and other peoples' tables, starts making threats to groups of people cordially conversing, and walks out.

[-]

Gitflow – Animated in React

Thanks! A command log would be really helpful too.

The HubFlow docs contain GitFlow docs and some really helpful diagrams: https://datasift.github.io/gitflow/IntroducingGitFlow.html

I change the release prefix to 'v' so that the git tags for the release look like 'v0.0.1' and 'v0.1.0':

  git config --replace-all gitflow.prefix.versiontag v
  git config --replace-all hubflow.prefix.versiontag v

I usually use HubFlow instead of GitFlow because it requires a Pull Request; though GitFlow does work offline / without access to GitHub.

[+]
[-]

Ask HN: How feasible is it to become proficient in several disciplines?

For example to become a professional in:

- back-end api development

- DevOps

- Data Engineer (big data, data science, ML, etc)

It is feasible, though as with any such generalization, you risk being a "jack of all trades, master of none". Maybe a title like "Full Stack Data Engineer" would be descriptive.

You could write an OAuth API for accepting and performing analysis of datasets (model fitting / parameter estimation; classification or prediction), write a test suite, write Kubernetes YAML for a load-balanced geodistributed dev/test/prod architecture, and continuously deploy said application (from branch merges, optionally with a manual confirmation step; e.g. with GitLab CI) and still not be an actual Data Engineer.

[-]

After rising for 100 years, electricity demand is flat

[+]

> Seems that power companies should encourage consumers to mine Bitcoin. Problem solved.

Blockchains will likely continue to generate considerable demand for electricity for the foreseeable future.

Blockchain firms can locate where energy is cheapest. Currently that's in countries where energy prices go negative due to excess capacity and insufficient energy storage resources (batteries, [hemp/graphene] supercapacitors, water towers).

With continued demand, energy companies can continue to invest in new clean energy generation alternatives.

Unfortunately, in the current administration's proposed budget, funding for ARPA-E is canceled and reallocated to clean coal, which Canada, France, and the UK are committed to phasing out entirely by ~2030.

[-]

Levi Strauss to use lasers instead of people to finish jeans

> The firm says the new techniques will reduce chemical use and make the way in which jeans are faded, distressed and ripped more efficient.

Yes, but can they make them as comfortable as this pair I've been working on for many years?

Can they sew/weave cool patches in?

[-]

Scientists use an atomic clock to measure the height of a mountain

Quantum_clock#More_accurate_experimental_clocks: https://en.wikipedia.org/wiki/Quantum_clock#More_accurate_ex...

> In 2015 JILA evaluated the absolute frequency uncertainty of their latest strontium-87 optical lattice clock at 2.1 × 10−18, which corresponds to a measurable gravitational time dilation for an elevation change of 2 cm (0.79 in) on planet Earth that according to JILA/NIST Fellow Jun Ye is "getting really close to being useful for relativistic geodesy".

AFAIU, this type of geodesy isn't possible with 'normal' time structs. Are nanoseconds enough?

"[Python-Dev] PEP 564: Add new time functions with nanosecond resolution" https://mail.python.org/pipermail/python-dev/2017-October/14...

[+]
[-]

Resources to learn project management best practices?

My side project is beginning to attract interest from a few people who would like to hop on board. At this point I am just doing what feels familiar and sensible, but the project manager perspective is new to me. Are there any sort of articles/books/podcasts/etc that could clue me into how to become better at it?

Project Management: https://wrdrd.github.io/docs/consulting/software-development... ... #requirements-traceability, #work-breakdown-structure (Mission, Project, Goal/Objective #n; Issue #n, - [ ] Task)

"Ask HN: How do you, as a developer, set measurable and actionable goals?" https://westurner.github.io/hnlog/#story-15119635

- Burndown Chart, User Stories

... GitHub and GitLab have milestones and reorderable issue boards. I still like https://waffle.io for complexity points; though you can also just create labels for e.g. complexity (Complexity-5) and priority (Priority-5).

[-]

Ask HN: Thoughts on a website-embeddable, credential validating service?

Reading Troy Hunt's password release V2 blog post [0], I came across the NIST recommendation to prevent users from creating accounts with passwords discovered in data breaches. This got me thinking: would a website admin (ex. small business owner with a custom website) benefit from a service that validates user passwords? The idea is to create a registration iframe with forms for email, password, etc., which would check hashed credentials against a database of data from breaches. Additionally, client-side validation would enforce rules recommended by the NIST's Digital Identity Guidelines [1], which would relieve admins from implementing their own rules. I'm sure there are additional security features that can be added.

1. Have you seen a need for this type of service, and could you see this being adopted at all?

2. Do you know of a service like this? I've looked, no hits so far.

3. Does the architecture seem sound?

[0]: https://www.troyhunt.com/ive-just-launched-pwned-passwords-version-2/

[1]: https://www.nist.gov/itl/tig/projects/special-publication-800-63
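For reference, the Pwned Passwords v2 range API described in [0] uses a k-anonymity scheme: the client sends only the first five hex characters of the password's SHA-1 hash and compares the returned suffixes locally, so the password (and even its full hash) never leaves the client. A minimal sketch of the client-side split (the network call itself is omitted):

```python
import hashlib

def range_query_parts(password: str):
    """Split SHA-1(password) into the 5-char prefix sent to the API
    and the suffix compared locally against the returned candidates."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = range_query_parts("password")
# The client would GET https://api.pwnedpasswords.com/range/<prefix>
# and look for <suffix> among the returned hash suffixes.
print(prefix)  # 5BAA6
```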

blockchain-certificates/cert-verifier-js: https://github.com/blockchain-certificates/cert-verifier-js

> A library to enable parsing and verifying a Blockcert. This can be used as a node package or in a browser. The browserified script is available as verifier.js.

https://github.com/blockchain-certificates/cert-issuer

> The cert-issuer project issues blockchain certificates by creating a transaction from the issuing institution to the recipient on the Bitcoin blockchain that includes the hash of the certificate itself.

... We could/should also store X.509 cert hashes in a blockchain.

[+]

Are you asking me why blockcerts stores certs in a blockchain?

Or whether using certs (really long passwords) is a better option than submitting unhashed passwords on a given datetime to a third-party in order to make sure they're not in the pwned passwords tables?

[+]

Known Traveler Digital Identity system is a "new model for airport screening and security that uses biometrics, cryptography and distributed ledger technologies."

Blockcerts are for academic credentials, AFAIU.

[EDIT]

Existing blockchains have a limited TPS (transactions per second) for writes, but not for reads. Sharding and layer-2 (sidechains) do not have the same assurances. I'm sure we all remember how CryptoKitties congested the Ethereum txpool around the time of the Bitcoin futures launch.

[+]
[-]

Ask HN: What's the best algorithms and data structures online course?

These aren't courses, but here are links from answers to "Ask HN: Recommended course/website/book to learn data structures and algorithms":

Data Structure: https://en.wikipedia.org/wiki/Data_structure

Algorithm: https://en.wikipedia.org/wiki/Algorithm

Big O notation: https://en.wikipedia.org/wiki/Big_O_notation

Big-O Cheatsheet: http://bigocheatsheet.com

Coding Interview University > Data Structures: https://github.com/jwasham/coding-interview-university/blob/...

OSSU: Open Source Society University > Core CS > Core Theory > "Algorithms: Design and Analysis, Part I" [&2] https://github.com/ossu/computer-science/blob/master/README....

"Algorithms, 4th Edition" (2011; Sedgewick, Wayne): https://algs4.cs.princeton.edu/

Complexity Zoo > Petting Zoo (P, NP, ...): https://complexityzoo.uwaterloo.ca/Petting_Zoo

While perusing awesome-awesomeness [1], I found awesome-algorithms [2], algovis [3], and awesome-big-o [4].

[1] https://github.com/bayandin/awesome-awesomeness

[2] https://github.com/tayllan/awesome-algorithms

[3] https://github.com/enjalot/algovis

[4] https://github.com/okulbilisim/awesome-big-o

[-]

Using Go as a scripting language in Linux

I, too, didn't realize that shebang parsing is implemented in the `binfmt_script` kernel module.

Does this persist across reboots?

  echo ':golang:E::go::/usr/local/bin/gorun:OC' | sudo tee /proc/sys/fs/binfmt_misc/register

[+]
[-]

Guidelines for enquiries regarding the regulatory framework for ICOs [pdf]

This includes a helpful table indicating, for Payment, Utility, Asset, and Hybrid coins/tokens, whether each is a security and whether it qualifies as payment under Swiss AML law.

The "Minimum information requirements for ICO enquiries" appendix seems like a good set of questions for evaluating ICOs. Are there other good questions to ask when considering whether to invest in a Payment, Utility, Asset, or Hybrid ICO?

Are US regulations different from these clear and helpful regulatory guidelines for ICOs in Switzerland?

[+]
[-]

The Benjamin Franklin method for learning more from programming books

> Read your programming book as normal. When you get to a code sample, read it over

> Then close the book.

> Then try to type it up.

According to a passage in "The Autobiography of Benjamin Franklin" (1791) about reconstructing essays from "The Spectator":

https://en.wikipedia.org/wiki/The_Autobiography_of_Benjamin_...

EBook: http://www.gutenberg.org/ebooks/148

[-]

Avoiding blackouts with 100% renewable energy

I notice that cases A and C require batteries for storage.

Should there be a separate entry for new-generation supercapacitors? Supercapacitors built with graphene or hemp have different Max Charge Rate (GW), Max Discharge Rate (GW), and Storage (TWh) capacities than even future-extrapolated batteries and current supercapacitors.

https://en.wikipedia.org/wiki/Supercapacitor

The cost and capabilities stats in this article look very promising:

"Hemp Carbon Makes Supercapacitors Superfast” https://www.asme.org/engineering-topics/articles/energy/hemp...

> “Our device’s electrochemical performance is on par with or better than graphene-based devices,” Mitlin says. “The key advantage is that our electrodes are made from biowaste using a simple process, and therefore, are much cheaper than graphene.”

> Graphene is, however, expensive to manufacture, costing as much as $2,000 per gram. [...] developed a process for converting fibrous hemp waste into a unique graphene-like nanomaterial that outperforms graphene. What’s more, it can be manufactured for less than $500 per ton.

> Hemp fiber waste was pressure-cooked (hydrothermal synthesis) at 180 °C for 24 hours. The resulting carbonized material was treated with potassium hydroxide and then heated to temperatures as high as 800 °C, resulting in the formation of uniquely structured nanosheets. Testing of this material revealed that it discharged 49 kW of power per kg of material—nearly triple what standard commercial electrodes supply, 17 kW/kg.

https://scholar.google.com/scholar?hl=en&q=hemp+supercapacit...

To be clear, supercapacitors are an alternative to li-ion batteries.

"Matching demand with supply at low cost in 139 countries among 20 world regions with 100% intermittent wind, water, and sunlight (WWS) for all purposes" (Renewable Energy, 2018) https://web.stanford.edu/group/efmh/jacobson/Articles/I/Comb...

[-]

Ask HN: What are some common abbreviations you use as a developer?

These are called 'codelabels'. They're great for prefix-tagging commit messages, pull requests, and todo lists:

BLD: build

BUG: bug

CLN: cleanup

DOC: documentation

ENH: enhancement

ETC: config

PRF: performance

REF: refactor

RLS: release

SEC: security

TST: test

UBY: usability

DAT: data

SCH: schema

REQ: requirement

REQ: request

ANN: announcement

STORY: user story

EPIC: grouping of user stories

There's a table of these codelabels here: https://wrdrd.github.io/docs/consulting/software-development...

Someday TODO FIXME XXX I'll get around to:

- [ ] DOC: create a separate site/organization for codelabels

- [ ] ENH: a tool for creating/renaming GitHub labels with unique foreground and background colors

YAGNI: Ya' ain't gonna need it

LOL, lulz

DRY: Don't Repeat Yourself

KISS: Keep It Super Simple

MVC: Model-View-Controller

MVT: Model-View-Template

MVVM: Model-View-View-Model

UI: User Interface

UX: User Experience

GUI: Graphical User Interface

CLI: Command Line Interface

CAP: Consistency, Availability, Partition tolerance

DHT: Distributed Hash Table

ETL: Extract, Transform, and Load

ESB: Enterprise Service Bus

MQ: Message Queue

VM: Virtual Machine

LXC: Linux Containers

[D]VCS, RCS: [Distributed] Version/Revision Control System

XP: Extreme Programming

CI: Continuous Integration

CD: Continuous Deployment

TDD: Test-Driven Development

BDD: Behavior-Driven Development

DFS, BFS: Depth/Breadth First Search

CRM: Customer Relationship Management

CMS: Content Management System

LMS: Learning Management System

ERP: Enterprise Resource Planning system

HTTP: Hypertext Transfer Protocol

HTTP STS: HTTP Strict Transport Security

REST: Representational State Transfer

API: Application Programming Interface

HTML: Hypertext Markup Language

DOM: Document Object Model

LD: Linked Data

LOD: Linked Open Data

URI: Uniform Resource Identifier

URN: Uniform Resource Name

URL: Uniform Resource Locator

UUID: Universally Unique Identifier

RDF: Resource Description Framework

RDFS: RDF Schema

OWL: Web Ontology Language

JSON-LD: JSON Linked Data

JSON: JavaScript Object Notation

CSVW: CSV on the Web

CSV: Comma Separated Values

CIA: Confidentiality, Integrity, Availability

ACL: Access Control List

RBAC: Role-Based Access Control

MAC: Mandatory Access Control

CWE: Common Weakness Enumeration

CVE: Common Vulnerabilities and Exposures

XSS: Cross-Site Scripting

CSRF: Cross-Site Request Forgery

SQLi: SQL Injection

ORM: Object-Relational Mapping

AUC: Area Under Curve

ROC: Receiver Operating Characteristic

DL: Description Logic

RL: Reinforcement Learning

CNN: Convolutional Neural Network

DNN: Deep Neural Network

IS: Information Systems

ROI: Return on Investment

RPU: Revenue per User

MAU: Monthly Active Users

DAU: Daily Active Users

STEM: Science, Technology, Engineering, Mathematics/Medicine

STEAM: STEM + Arts

W3C: World Wide Web Consortium

GNU: GNU's not Unix

WRDRD: WRD R&D

... The Sphinx ``.. index::`` directive makes it easy to include index entries for acronym forms, too https://wrdrd.github.io/docs/genindex

[-]

There Might Be No Way to Live Comfortably Without Also Ruining the Planet

"A good life for all within planetary boundaries" (2018) https://www.nature.com/articles/s41893-018-0021-4

> Abstract: Humanity faces the challenge of how to achieve a high quality of life for over 7 billion people without destabilizing critical planetary processes. Using indicators designed to measure a ‘safe and just’ development space, we quantify the resource use associated with meeting basic human needs, and compare this to downscaled planetary boundaries for over 150 nations. We find that no country meets basic needs for its citizens at a globally sustainable level of resource use. Physical needs such as nutrition, sanitation, access to electricity and the elimination of extreme poverty could likely be met for all people without transgressing planetary boundaries. However, the universal achievement of more qualitative goals (for example, high life satisfaction) would require a level of resource use that is 2–6 times the sustainable level, based on current relationships. Strategies to improve physical and social provisioning systems, with a focus on sufficiency and equity, have the potential to move nations towards sustainability, but the challenge remains substantial.

> "Radical changes are needed if all people are to live well within the limits of the planet," [...]

> "These include moving beyond the pursuit of economic growth in wealthy nations, shifting rapidly from fossil fuels to renewable energy, and significantly reducing inequality.

> "Our physical infrastructure and the way we distribute resources are both part of what we call provisioning systems. If all people are to lead a good life within the planet's limits then these provisioning systems need to be fundamentally restructured to allow for basic needs to be met at a much lower level of resource use."

Perhaps ironically, our developments in service of the sustainability (resource-efficiency) needs of a civilization on Mars are directly relevant to solving these problems on Earth.

Recycle everything.

Survive without soil, steel, hydrocarbons, animals, oxygen.

Convert CO2, sunlight, H2O, and geothermal energy to forms necessary for life.

https://en.wikipedia.org/wiki/Colonization_of_Mars

Algae, carbon capture, carbon sequestration, lab grown plants, water purification, solar power, [...]

Mars requires a geomagnetic field in order to sustain an atmosphere in order to [...].

"The Limits to Growth" (1972, 2004) [1] very clearly forecasts these same unsustainable patterns of resource consumption: 'needs' which exceed and transgress our planetary biophysical boundaries.

The 17 UN Sustainable Development Goals (#GlobalGoals) [2] outline our worthwhile international objectives (Goals, Targets, and Indicators). The Paris Agreement [3] sets targets and asks for commitments from nation states (and businesses) to help achieve these goals most efficiently and most sustainably.

In the US, the Clean Power Plan [4] was intended to redirect our national resources toward renewable energy with far less external costs. Direct and indirect subsidies for nonrenewables are irrational. Are subsidies helpful or necessary to reach production volumes of renewable energy products and services?

There are certainly financial incentives for anyone who chooses to invest in solving for the Global Goals; and everyone can!

[1] https://en.wikipedia.org/wiki/The_Limits_to_Growth

[2] http://www.un.org/sustainabledevelopment/sustainable-develop...

[3] https://en.wikipedia.org/wiki/Paris_Agreement

[4] https://en.wikipedia.org/wiki/Clean_Power_Plan

[-]

Multiple GWAS finds 187 intelligence genes and role for neurogenesis/myelination

> We found evidence that neurogenesis and myelination—as well as genes expressed in the synapse, and those involved in the regulation of the nervous system—may explain some of the biological differences in intelligence.

re: nurture, hippocampal plasticity and hippocampal neurogenesis also appear to be affected by dancing and omega-3,6 (which are transformed into endocannabinoids by the body): https://news.ycombinator.com/item?id=15109698

[-]

Could we solve blockchain scaling with terabyte-sized blocks?

These numbers in a computational model (or even Jupyter notebooks) would be useful.

We may indeed need fractional satoshis ('naks').

With terabyte blocks, lightning network would be unnecessary: at least for TPS.
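A first cell of that computational model might be a back-of-envelope TPS estimate. The average transaction size here is an assumed, illustrative figure; the block interval is Bitcoin's 10-minute target:

```python
# Back-of-envelope TPS for terabyte blocks.
BLOCK_BYTES = 1e12      # 1 TB block
AVG_TX_BYTES = 250      # assumed average transaction size
BLOCK_INTERVAL_S = 600  # 10-minute target block interval

txs_per_block = BLOCK_BYTES / AVG_TX_BYTES
tps = txs_per_block / BLOCK_INTERVAL_S
print(f"{txs_per_block:.2e} tx/block, {tps:,.0f} TPS")
```

Roughly 6.7 million TPS under these assumptions; sensitivity to the assumed transaction size (and confidence intervals, as noted below) would be the next cells of the notebook.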

There will need to be changes to account for quantum computing capabilities somewhere in the future timeline of Bitcoin (and everything else in banking and value-producing industry): probably a different hash function instead of just a routine difficulty increase (and definitely something other than ECDSA, which isn't a primary cost). $1.3m/400k a year to operate a terabyte mining rig with 50 Gbps bandwidth would affect decentralization; though maybe not much more than it is already affected now.

https://en.bitcoin.it/wiki/Weaknesses#Attacker_has_a_lot_of_... (51%)

Confidence intervals for these numbers would be useful.

Casper PoS and beyond may also affect future Bitcoin volume estimates.

[-]

Ask HN: Do you have ADD/ADHD? How do you manage it?

Also, how has it affected your CS career? I feel that transitioning to management would help, as it does not require lengthy periods of concentration, but rather distributed attention for shorter periods.

Music. Headphones. Chillstep, progressive, chillout etc. from di.fm. Long mixes from SoundCloud with and without vocals. "Instrumental"

Breathe in through the nose and out through the mouth.

Less sugar and processed foods. Though everyone has a different resting glucose level.

Apparently it's called alpha-pinene.

Fidget things. Rubberband, paperclip.

The Pomodoro Technique: work 25 minutes, chill for 5 (and look at something at least 20 feet away (20-20-20 rule))

Lists. GTD. WBS.

Exercise. Short walks.

[-]

Ask HN: How to understand the large codebase of an open-source project?

Hello All!

What techniques have you all used to learn and understand a large codebase? What tools do you use?

Write the namespace outline out by hand on a whiteboard or a sheet of paper.

Use a static analyzer to build a graph of the codebase.

Build an adjacency list and a graph of the imports; and topologically + (…) sort.
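A minimal sketch of that last suggestion, using only the stdlib (Python 3.9+ for graphlib); the package name "mypkg" and directory layout are hypothetical:

```python
import ast
from pathlib import Path
from graphlib import TopologicalSorter

def import_graph(pkg_dir: Path, package: str) -> dict[str, set[str]]:
    """Map each module under pkg_dir to the intra-package modules it imports."""
    graph: dict[str, set[str]] = {}
    for path in pkg_dir.rglob("*.py"):
        module = ".".join((package, *path.relative_to(pkg_dir).with_suffix("").parts))
        deps: set[str] = set()
        for node in ast.walk(ast.parse(path.read_text())):
            if isinstance(node, ast.Import):
                deps.update(a.name for a in node.names if a.name.startswith(package))
            elif isinstance(node, ast.ImportFrom) and node.module and node.module.startswith(package):
                deps.add(node.module)
        graph[module] = deps
    return graph

# Dependency-first order (leaf modules before the modules that import them):
# order = list(TopologicalSorter(import_graph(Path("src/mypkg"), "mypkg")).static_order())
```

Reading the leaf modules first, in that order, is one way to work up to the entry points.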

[-]

What is the best way to learn to code from absolute scratch?

We have been hosting a Ugandan refugee in our home in Oakland for the past 9 months and he wants to learn how to code.

Where is the best place for him to start from absolute scratch? What resources can we point him to? Who can help?

Here's an answer to a similar question: "Ask HN: How to introduce someone to programming concepts during 12-hour drive?" https://news.ycombinator.com/item?id=15454421

https://learnxinyminutes.com/docs/python3/ (Python3)

https://learnxinyminutes.com/docs/javascript/ (JavaScript)

https://learnxinyminutes.com/docs/git/ (Git)

https://learnxinyminutes.com/docs/markdown/ (Markdown)

Read the docs. Read the source. Write docstrings. Write automated tests: that's the other half of the code.

Keep a journal of your knowledge as e.g. Markdown or ReStructuredText; regularly pull the good ones from bookmarks and history into an outline.

I keep a tools reference doc with links to Wikipedia, Homepage, Source, Docs: https://wrdrd.github.io/docs/tools/

And a single-page log of my comments: https://westurner.github.io/hnlog/

> To get a job, "Coding Interview University": https://github.com/jwasham/coding-interview-university

[+]
[-]

Tesla racing series: Electric cars get the green light – Roadshow

Tesla Racing Circuit ideas for increasing power discharge rate, reducing heat, and reducing build weight:

Hemp supercapacitors (similar power density as graphene supercapacitors and li-ion, lower cost than graphene)

Active cooling. Modified passive cooling.

Biocomposite frame and panels (stronger and lighter than steel and aluminum (George Washington Carver))

> Biocomposite frame and panels (stronger and lighter than steel and aluminum (George Washington Carver))

"Soybean Car" (1941) https://en.wikipedia.org/wiki/Soybean_car

[-]

What happens if you have too many jupyter notebooks?

These days there is a tendency in data analysis to use Jupyter Notebooks. But what happens if you have too many jupyter notebooks? For example, there are more than a hundred.

Actually, you start creating some modules. However, it is less convenient to work with them compared to what was before. It happens that you should code in web interface, somewhere in similar to the notepad++ form or you should change your IDLE.

Personally, I work in Pycharm and so far I couldn't assess the remote interpreter or VCS. It is because pickle files or word2vec weigh too much (3GB+) and so I don't want to download/upload them. Also, Jupyter isn't cool in PyCharm.

Do you have better practices in your companies? How to correctly adjust IDLE? Do you know about any possible substitution for the IPython notebook in the world of data analysis?

> what happens if you have too many jupyter notebooks? For example, there are more than a hundred.

Like anything else, Jupyter Notebook is limited by the CPU and RAM of the system hosting the Tornado server and Jupyter kernels.

At 100 notebooks (or even just one), it may be a good time to factor common routines into a packaged module with tests and documentation.

It's actually possible (though inefficient) to import code from Jupyter notebooks with ipython/ipynb (pypi:ipynb): https://github.com/ipython/ipynb ( https://jupyter-notebook.readthedocs.io/en/stable/examples/N... )

> Actually, you start creating some modules. However, it is less convenient to work with them compared to what was before. It happens that you should code in web interface, somewhere in similar to the notepad++ form or you should change your IDLE.

The Spyder IDE has support for .ipynb notebooks converted to .py (which have the IPython prompt markers in them). Spyder can connect an interpreter prompt to a running IPython/Jupyter kernel. There's also a Spyder plugin for Jupyter Notebook: https://github.com/spyder-ide/spyder-notebook

> Personally, I work in Pycharm and so far I couldn't assess the remote interpreter or VCS. It is because pickle files or word2vec weigh too much (3GB+) and so I don't want to download/upload them.

Remote data access times can be made faster by increasing the space efficiency of the storage format, increasing the bandwidth of the connection, moving the data to the code, or moving the code to the data.
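On the storage-format point, a stdlib-only sketch: gzip-compressing a pickle before moving it over the network. (The data here is a hypothetical stand-in for model weights; real word2vec float vectors compress less well than this repetitive example.)

```python
import gzip
import pickle

# Stand-in for large model weights (hypothetical).
data = {"vectors": [[0.0] * 300 for _ in range(1000)]}

raw = pickle.dumps(data)
compressed = gzip.compress(raw)

# Shipping model.pkl.gz instead of model.pkl cuts transfer time
# roughly in proportion to the size reduction.
print(len(raw), len(compressed))
```

Columnar formats (HDF5, Parquet) or moving the computation to where the 3GB file already lives are the other options mentioned above.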

> Do you have better practices in your companies?

There are a number of [Reproducible] Data Science cookiecutter templates which have a directory for notebooks, module packaging, and Sphinx docs: https://cookiecutter.readthedocs.io/en/latest/readme.html#da...

Refactoring increases testability and code reuse.

> How to correctly adjust IDLE?

I don't think I understand the question?

"Configuring IPython" https://ipython.readthedocs.io/en/stable/config/index.html

Jupyter > "Installation, Configuration, and Usage" https://jupyter.readthedocs.io/en/latest/projects/content-pr...

> Do you know about any possible substitution for the IPython notebook in the world of data analysis?

From https://en.wikipedia.org/wiki/Notebook_interface :

> > "Examples of the notebook interface include the Mathematica notebook, Maple worksheet, MATLAB notebook, IPython/Jupyter, R Markdown, Apache Zeppelin, Apache Spark Notebook, and the Databricks cloud."

There are lots of Jupyter kernels for different tools and languages (over 100; including for other 'notebook interfaces'): https://github.com/jupyter/jupyter/wiki/Jupyter-kernels

And there are lots of Jupyter integrations and extensions: https://github.com/quobit/awesome-python-in-education/blob/m...

[-]

Cancer ‘vaccine’ eliminates tumors in mice

The article is about this study:

"Eradication of spontaneous malignancy by local immunotherapy" http://stm.sciencemag.org/content/10/426/eaan4488

> In situ vaccination with low doses of TLR ligands and anti-OX40 antibodies can cure widespread cancers in preclinical models.

[-]

Boosting teeth’s healing ability by mobilizing stem cells in dental pulp

Tideglusib

https://en.wikipedia.org/wiki/Tideglusib

> "Promotion of natural tooth repair by small molecule GSK3 antagonists" https://www.nature.com/articles/srep39654

> [...] Here we describe a novel, biological approach to dentine restoration that stimulates the natural formation of reparative dentine via the mobilisation of resident stem cells in the tooth pulp.

This Biodegradable Paper Donut Could Let Us Reforest the Planet

"These drones can plant 100,000 trees a day" https://news.ycombinator.com/item?id=16260892

> Called the Cocoon, this simple invention protects seedlings from harsh arid climates and reduces the amount of water they need to thrive–and boosts their survival rate by as much as 80%.

[-]

Drones that can plant 100k trees a day

> It’s simple maths. We are chopping down about 15 billion trees a year and planting about 9 billion. So there’s a net loss of 6 billion trees a year.

[+]
[+]
[+]

"This Biodegradable Paper Donut Could Let Us Reforest The Planet" https://news.ycombinator.com/item?id=16261101

[+]
[-]

What are some YouTube channels to progress into advanced levels of programming?

There are some cool YouTube channel suggestions on https://news.ycombinator.com/item?id=16224165 But I wanted to know which of those are great to progress into advanced level of programming? Which of the channels teach advanced techniques?

[-]

Multiple issue and pull request templates

+1

Default: /ISSUE_TEMPLATE.md

/ISSUE_TEMPLATE/<name>.md

Default: /PULL_REQUEST_TEMPLATE.md

/PULL_REQUEST_TEMPLATE/<name>.md

[+]

Good call. I've updated the post.

[-]

Five myths about Bitcoin’s energy use

nvk | 2018-01-25 17:38:38 | 10 | # | ^
[+]

Proof of Work (Bitcoin*, ...), Proof of Stake (Ethereum Casper), Proof of Space, Proof of Research (GridCoin, CureCoin,)

Plasma (Ethereum) and Lightning Network (BitCoin (SHA256), Litecoin (scrypt),) will likely offload a significant amount of transaction volume and thereby reduce the kWh/transaction metrics.

> But electricity costs matter even more to a Bitcoin miner than typical heavy industry. Electricity costs can be 30-70% of their total costs of operation.

> [...] If Bitcoin mining really does begin to consume vast quantities of the global electricity supply it will, it follows, spur massive growth in efficient electricity production—i.e. the green energy revolution. Moore’s Law was partially a story about incredible advances in materials science, but it was also a story about incredible demand for computing that drove those advances and made semiconductor research and development profitable. If you want to see a Moore’s-Law-like revolution in energy, then you should be rooting for, and not against, Bitcoin. The fact is that the Bitcoin network, right now, is providing a $200,000 bounty every 10 minutes (the mining reward) to the person who can find the cheapest energy on the planet.

[+]

If the market had internalized the external health, environmental, and defense costs of nonrenewable energy, we would already have cheap, plentiful renewable energy. But we don't: the market is failing to optimize for factors other than margin. (New Keynesian economics admits market failure, but not non-rationality.)

So, (speculative_valuation - cost) is the margin. Whereas with a stock in a leveraged high-frequency market with shorting, (shareholder_equity - market_cap) is explainable in terms of the market information that is shared.

So, it's actually (~$200K-(n_kwhrs*cost_kwhr)) for whoever wins the block mining lottery (which is about every 10 minutes and can be anyone who's mining).
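That margin expression, with hypothetical numbers for the energy term (the ~$200K reward figure is from the article; the kWh and rate are made up for illustration):

```python
# Hypothetical miner economics for one won block.
block_reward_usd = 200_000   # ~$200K bounty every ~10 minutes (from the article)
n_kwhrs = 10_000             # assumed energy spent per block won
cost_kwhr = 0.05             # assumed cheap industrial/renewable rate, USD/kWh

margin = block_reward_usd - n_kwhrs * cost_kwhr
print(margin)  # 199500.0
```

Cheaper energy directly widens the margin, which is the incentive the parent comment describes.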

But the point about Bitcoin maintaining demand for energy while we move to competitive, lower-cost renewable energy and greater efficiency is a good one.

What we should hope to see is the blockchain industry directly investing in clean energy capacity development in order to rationally minimize their primary costs and maximize environmental sustainability.

[+]

Yes, and then energy prices would decrease due to less demand. Blockchain energy usage maintains demand for energy; which keeps prices high enough that production of renewables can profitably compete with nonrenewables while we reach production volumes of solar, wind, and hemp supercapacitors for grid storage.

> Throughout the first half of 2008, oil regularly reached record high prices.[2][3][4][5] Prices on June 27, 2008, touched $141.71/barrel, for August delivery in the New York Mercantile Exchange [...] The highest recorded price per barrel maximum of $147.02 was reached on July 11, 2008.

At that price, there's more demand for renewables (such as electric vehicles and solar panels)

> Since late 2013 the oil price has fallen below the $100 mark, plummeting below the $50 mark one year later.

https://en.wikipedia.org/wiki/World_oil_market_chronology_fr...

... Energy costs and inflation are highly covariate. (Trouble is, CPI All rarely ever goes back down)

[+]
[+]

The block reward is an incentive for redundant distributed replica nodes.

[-]

Ask HN: Recommended course/website/book to learn data structure and algorithms

I am a full-time Android developer who does most of his programming work in Java. I am a non CS graduate so didn't study Data structure and algorithms course in university so I am not familiar with this subject which is hindering my prospect of getting better programming jobs. There are so many resources out there on this subject that I am unable to decide which one is the best for my case. Could someone please point me out in the right direction. Thanks.

Data Structure: https://en.wikipedia.org/wiki/Data_structure

Algorithm: https://en.wikipedia.org/wiki/Algorithm

Big O notation: https://en.wikipedia.org/wiki/Big_O_notation

Big-O Cheatsheet: http://bigocheatsheet.com

Coding Interview University > Data Structures: https://github.com/jwasham/coding-interview-university/blob/...

OSSU: Open Source Society University > Core CS > Core Theory > "Algorithms: Design and Analysis, Part I" [&2] https://github.com/ossu/computer-science/blob/master/README....

"Algorithms, 4th Edition" (2011; Sedgewick, Wayne): https://algs4.cs.princeton.edu/

While perusing awesome-awesomeness [1], I found awesome-algorithms [2] , algovis [3], and awesome-big-o [4].

[1] https://github.com/bayandin/awesome-awesomeness

[2] https://github.com/tayllan/awesome-algorithms

[3] https://github.com/enjalot/algovis

[4] https://github.com/okulbilisim/awesome-big-o

[-]

ORDO: a modern alternative to X.509

There are a number of W3C specs for this type of thing.

Linked Data Signatures (ld-signatures) relies upon a graph canonicalization algorithm that works with any RDF format (RDF/XML, JSON-LD, Turtle,)

> The signature mechanism can be used across a variety of RDF data syntaxes such as JSON-LD, N-Quads, and TURTLE, without the need to regenerate the signature

https://w3c-dvcg.github.io/ld-signatures/

A defined way to transform ORDO to RDF would be useful for WoT graph applications.

WebID can express X509 certs with the cert ontology. {cert:X509Certificate, cert:PGPCertificate,} rdfs:subClassOf cert:Certificate

https://www.w3.org/ns/auth/cert

https://www.w3.org/2005/Incubator/webid/spec/

ld-signatures is newer than WebID.

(Also, we should put certificates in a blockchain; just like Blockcerts (JSON-LD))

[-]

Wine 3.0 Released

Hopefully this fixes the text in the GMAT Prep app.

[-]

Kimbal Musk is leading a $25M mission to fix food in US schools

+1. The introduction to "Nudge: Improving Decisions about Health, Wealth, and Happiness" discusses how choices about food placement in cafeterias influence students' dietary decisions.

[-]

Spinzero – A Minimal Jupyter Notebook Theme

+1. The Computer Modern serif fonts look legit. Like LaTeX legit.

Now, if we could make the fonts unscalable and put things in two columns (in order to require extra scrolling and 36 character wide almost-compiling copy-and-pasted code samples without syntax highlighting) we'd be almost there!

[+]
[-]

What does the publishing industry bring to the Web?

Q: What does the publishing industry bring to the Web?

A: PDF hosting, comments, a community of experts

FWIU, Publishing@W3C proposes WPUB [1] instead of PDF or MHTML for 'publishing' http://schema.org/ScholarlyArticle .

How do WPUB canonical identifiers (which reference/redirect(?) to the latest version of the resource) work with W3C Web Annotations attached to e.g. sentences within a resource identified with a URI? When the document changes, what happens to the attached comments? This is also a problem with PDFs: with a filename like document-20180111-v01.pdf and a stable(!) URL like http://example.org/document-20180111-v01.pdf, we can add Web Annotations to that URI; but with a new URI, those annotations are lost.

[1] https://w3c.github.io/wpub/

[-]

Git is a blockchain

Bitcoin is very much inspired by git; though in terms of immutability it's more similar to mercurial and subversion (git push -f)

Git accepts whatever timestamp a node chooses to add to a commit. This can cause interesting discrepancies between chronological and topological sort orders.

Without an agreed-upon central git server there is not a canonical graph.

You can use GPG signatures with Git, but you need to provide your own keyserver and then there's still no way to enforce permissions (e.g. who can ALTER, UPDATE, or DELETE which files).

Git is a directed acyclic graph (DAG). Not a chain. Blockchains are chains to prevent double-spending (e.g. on a different fork).

Bitcoin was accepted by The Linux Foundation (Linus Torvalds wrote Git): https://lists.linuxfoundation.org/pipermail/bitcoin-dev/

[+]
[-]

Show HN: Convert Matlab/NumPy matrices to LaTeX tables

LaTeX must be escaped in order to prevent LaTeX injection.

AFAIU, numpy.savetxt does not escape LaTeX characters?

Jupyter Notebook rich object display protocol checks for obj._repr_latex_() when converting a Jupyter notebook from .ipynb to LaTeX.

The Pandas _repr_latex_() function calls to_latex(escape=True). https://github.com/pandas-dev/pandas/blob/master/pandas/core...

Note: the default value of escape (and a few other presentational parameters) is determined by the display.latex.escape option: https://pandas.pydata.org/pandas-docs/stable/options.html?hi...

df = pd.read_csv('filename.csv', ); df.to_latex(escape=True)

Or, with a Jupyter notebook:

df = pd.read_csv('filename.csv', ); df

# $ jupyter nbconvert --to latex filename.ipynb

Wouldn't it be great if there was a LaTeX incantation that allowed for specifying that the referenced dataset URI (maybe optionally displayed also as a table) is a premise of the analysis; with RDFa and/ or JSONLD in addition to LaTeX PDF? That way, an automated analysis tool could identify and at least retrieve the data for rigorous unbiased analyses.

http://schema.org/Dataset

http://schema.org/ScholarlyArticle

#StructuredPremises

[-]

NIST Post-Quantum Cryptography Round 1 Submissions

[+]

This paper lists a few of the practical concerns for quantum-resistant algos (and proposes an algo that wasn't submitted to NIST Post-Quantum Cryptography Round 1):

"Quantum attacks on Bitcoin, and how to protect against them" https://arxiv.org/abs/1710.10377 (~2027?)

A few Quantum Computing and Quantum Algorithm resources: https://news.ycombinator.com/item?id=16052193

Responsive HTML (arxiv-vanity/engrafo, PLoS,) or Markdown in a Jupyter notebook (stored in a Git repo with a tag and maybe a DOI from figshare or Zenodo) really would be far more useful than comparing LaTeX equations rendered into PDFs.

[-]

Gridcoin: Rewarding Scientific Distributed Computing

[+]

> Imagine the hash rate of the BTC network going towards some useful calculations.

https://curecoin.net

""" CureCoin Reaches #1 Ranking on Folding@home

As of the afternoon of August 29, 2017 (Eastern Time), the Curecoin Team 224497 earned the world's #1 rank on Stanford's Folding@home - a protein folding simulation Distributed Computing Network (DCN). In a little over 3 years, the team (including our merge-folding partners at Foldingcoin) collectively produced 160 billion points worth of molecular computations to support research in the areas of cancer, Alzheimer's, Huntington's, Parkinson's, Infectious Disease as well as helping scientists uncover new molecular dynamics through groundbreaking computational techniques. """

[+]
[+]
[+]
[+]

There's a pretty hard limit bounding the optimizability of SHA256. That's why hashcash uses a cryptographic hash function.

There may be - or, very likely are - shortcuts for proof of research better than Grover's; which, when found, will also be very useful for science and medicine. However, that advantage is theoretically destabilizing for a distributed consensus network; which is also a strange conflict in incentives.

Sort of like buying "buy gold" commercials when the market was heading into the worst recession since the Great Depression.

SSL accelerators may benefit from the SHA256 ASIC optimizations incentivized by the bitcoin design.

"""The accelerator provides the RSA public-key algorithm, several widely used symmetric-key algorithms, cryptographic hash functions, and a cryptographically secure pseudo-random number generator"""

GPU prices are also lower now; probably due to demand pulling volume. The TPS (transactions per second) rate is doing much better these days.

How would you solve the local datetime problem with Git and signatures?

[-]

Power Prices Go Negative in Germany

[+]
[+]
[+]
[+]
[+]
[+]

"Several countries in Europe have experienced negative power prices, including Belgium, Britain, France, the Netherlands and Switzerland."

> Yes, as does most media - especially in Germany. Those negative prices are no win for any german. Why else is it, that we will soon pay the highest prices for electricity in the world?

AFAIU, it's because you're aggressively shaping the energy market in order to reduce health and environmental costs now.

The technical issue here is that batteries are not good enough yet; and [hemp] supercapacitors are not yet at the volume needed to lower the costs. So, maintaining a high price for energy keeps the market competitive for renewables, which have fewer negative externalities.

Can the excess energy on certain days be converted back to money through cryptocurrency mining? (While society decides whether batteries are a crucial energy security investment)

[-]

Bitcoin is an energy arbitrage

js4 | 2017-12-20 10:43:31 | 51 | # | ^

In addition to relocating to where energy is the least expensive, Bitcoin creates incentive for miners to lower the local cost of energy: invest in renewable energy.

Renewable Energy / Clean Energy is now less expensive than alternatives; with continued demand, the margins are at least maintained.

> In addition to relocating to where energy is the least expensive, Bitcoin creates incentive for miners to lower the local cost of energy: invest in renewable energy.

We have lots of direct and effective subsidies for nonrenewable energy in the United States. And some for renewables, as well. For example, from [1], the average effective tax rate across all money-making companies is 26%:

"Coal & Related Energy": 0.69%

"Oil/Gas (integrated)": 8.01%

"Power": 29.22%

"Green and Renewable Energy": 26.42%

[1] "Tax Rates by Sector (US)" (January 2017) http://pages.stern.nyu.edu/~adamodar/New_Home_Page/datafile/...

X-posting here from the article's comments:

The price reflects the confidence investors have in the security's ability to meet or exceed inflation and in the information security of the network.

Volatility adds value for algo traders: say the prices are [1, 101, 51, 101, 51, 201]:

(101-1)+(101-51)+(201-51)=300

(201-1)=200
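The same arithmetic as code: sum every positive swing (a perfect-foresight trader) versus buy-and-hold over the example prices.

```python
prices = [1, 101, 51, 101, 51, 201]

# Perfect-foresight swing trading: collect every positive move.
swing = sum(max(b - a, 0) for a, b in zip(prices, prices[1:]))

# Buy-and-hold: last price minus first.
hold = prices[-1] - prices[0]

print(swing, hold)  # 300 200
```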

For the average Joe looking at the vested options they're hodling, though, volatility is unfriendly.

When e.g. algo-traders are willing to buy in when the price starts to fall, they're making liquidity; which some exchanges charge less for.

Enigma Catalyst (Zipline) is one way to backtest and live-trade cryptocurrencies algorithmically.

[-]

There are now more than 200k pending Bitcoin transactions

[+]
[+]

The OT link does say "Transactions Per Second 22.54".

The solutions for this 3 hour backlog of unconfirmed transactions include: implementing SegWit, increasing the blocksize, and Lightning Network.

[+]
[-]

What ORMs have taught me: just learn SQL (2014)

ORMs:

- Are maintainable by a team. "Oh, because that seemed faster at the time."

- Are unit tested: eventually we end up creating at least structs or objects anyway, and then that needs to be the same everywhere, and then the abstraction is wrong because "everything should just be functional like SQL" until we need to decide what you called "the_initializer2".

- Can make it very easy to create maintainable test fixtures which raise exceptions when the schema has changed but the test data hasn't.

- Prevent SQL injection errors by consistently parametrizing queries and appropriately quoting for the target SQL dialect. (One of the Top 25 most frequent vulnerabilities). This is especially important because most apps GRANT both UPDATE and DELETE; if not CREATE TABLE and DROP TABLE to the sole app account.

- Make it much easier to port to a new database; or run tests with SQLite. With raw SQL, you need the table schema in your head and either comprehensive test coverage or to review every single query (and the whole function preceding db.execute(str, *params))

- May be the performance bottleneck for certain queries; which you can identify with code profiling and selectively rewrite by hand if adding an index and hinting a join or lazifying a relation aren't feasible with the non-SQLAlchemy ORM that you must use.

- Should provide a way to generate the query at dev or compile-time.

- Should make it easy to DESCRIBE the query plans that code profiling indicates are worth hand-optimizing (learning SQL is sometimes not the same as learning how a particular database plans a query over tables without indexes)

- Make managing db migrations pretty easy.

- SQLAlchemy really is great. SQLAlchemy has eager loading to solve the N+1 query problem. Django is often more than adequate; and has had prefetch_related() to solve the N+1 query problem since 1.4. Both have an easy way to execute raw queries (that all need to be reviewed for migrations). Both are much better at paging without allocating a ton of RAM for objects and object attributes that are irrelevant now.

- Make denormalizing things from a transactional database with referential integrity into JSON really easy; which webapps and APIs very often need to do.
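The SQL-injection point above, sketched with stdlib sqlite3 rather than an ORM (the table and payload are hypothetical): placeholders bind attacker-controlled input as data, which is what an ORM does for you consistently.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

evil = "x' OR '1'='1"

# Parametrized: the payload is bound as a value, so the injection finds nothing.
safe = conn.execute("SELECT * FROM users WHERE name = ?", (evil,)).fetchall()

# String formatting would instead splice the payload into the SQL text:
#   conn.execute(f"SELECT * FROM users WHERE name = '{evil}'")  # matches every row

print(safe)  # []
```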

Is there a good JS ORM? Maybe in TypeScript?

[+]
[+]
[-]

Show HN: An educational blockchain implementation in Python

jre | 2017-12-17 07:32:06 | 412 | # | ^
[+]
[+]
[+]
[+]

For deterministic serialization (~canonicalization), you can use sort_keys=True or serialize OrderedDicts. For deserialization, you'd need object_pairs_hook=collections.OrderedDict.
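A quick sketch of that (the block dict is a made-up example): sort_keys=True gives a canonical byte string to hash, and object_pairs_hook preserves key order on the way back in.

```python
import collections
import hashlib
import json

block = {"nonce": 42, "prev_hash": "00ab", "tx": ["a->b:1"]}

canonical = json.dumps(block, sort_keys=True, separators=(",", ":"))
digest = hashlib.sha256(canonical.encode()).hexdigest()

# Any dict with the same contents hashes identically, whatever its key order.
reordered = {"tx": ["a->b:1"], "prev_hash": "00ab", "nonce": 42}
assert json.dumps(reordered, sort_keys=True, separators=(",", ":")) == canonical

decoded = json.loads(canonical, object_pairs_hook=collections.OrderedDict)
print(type(decoded).__name__)  # OrderedDict
```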

Most current blockchains sign a binary representation with fixed length fields. In terms of JSON, JSON-LD is for graphs and it can be canonicalized. Blockcerts and Chainpoint are JSON-LD specs:

> Blockcerts uses the Verifiable Claims MerkleProof2017 signature format, which is based on Chainpoint 2.0.

https://github.com/blockchain-certificates/cert-verifier-js/...

[+]
[+]

It's now the spec for 3.6+.

> #python news: @gvanrossum just pronounced that dicts are now guaranteed to retain insertion order. This is the end of a long journey.

https://twitter.com/raymondh/status/941709626545864704

More here: https://www.reddit.com/r/Python/comments/7jyluw/dict_knownor...

OrderedDicts are backwards-compatible and are guaranteed to maintain order after deletion.

Thanks! Simplest explanation I've seen.

Here's an nbviewer link (which, like base58, works on/over a phone): https://nbviewer.jupyter.org/github/julienr/ipynb_playground...

Note that Bitcoin does two rounds of SHA256 rather than one round of MD5. There's also a "P2P DHT" (peer-to-peer distributed hash table) for storing and retrieving blocks from the blockchain; instead of traditional database multi-master replication and secured offline backups.
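The double round is just SHA-256 applied to its own digest:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Two rounds of SHA-256, as Bitcoin uses for block and transaction hashes."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

print(double_sha256(b"hello").hex())
```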

> ERROR:root:Invalid transaction signature, trying to spend someone else's money ?

This could be more specific. Where would these types of error messages log to?

My mistake, it's BitTorrent that has a DHT. Instead of finding the most network local peer with the block identified by a (prev_hash, hash) hash table key, the Bitcoin blockchain broadcasts all messages to all nodes; which must each maintain a complete backup of the entire blockchain.

"Protocol documentation" https://en.bitcoin.it/wiki/Protocol_documentation

[-]

MSU Scholars Find $21T in Unauthorized Government Spending

Unauthorized federal spending (in these two departments) 1998-2015: $21T

Federal debt (2017): $20T

$ 20,000,000,000,000 USD

Would a blockchain for government expenditures help avoid this type of error?

We already now have https://usaspending.gov ( https://beta.usaspending.gov ) and expenditure line item metadata.

Would having traceable money in a distributed ledger help us keep track of money collected from taxpayers?

Obviously, the volatility of most cryptocurrencies would be disadvantageous for purposes of transferring and accounting for government spending. Isn't there a way to peg a cryptocurrency to the USD; even with Quantitative Easing? How is Quantitative Easing different from just deciding to print trillions more 'coins' in order to counter debt or inflation or deflation; why is the government in debt at all?

re: Quantitative Easing

https://en.wikipedia.org/wiki/Quantitative_easing

Say I have $100 in my Social Security Fund (in very non-aggressive investments which need to meet or exceed inflation), and the total supply of money (including paper notes and numbers in the debit and credit columns of various public and private databases) is $1T, with $1T in debt; if $1T is printed to pay for that debt, is my $100 in retirement savings then worth $50? Or is it more complex than that?
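In the naive model of the question (which ignores money velocity, interest rates, and everything else real), purchasing power scales by old supply over new supply:

```python
savings = 100
money_supply = 1_000_000_000_000      # $1T
newly_printed = 1_000_000_000_000     # another $1T printed to pay the debt

# If prices fully adjusted to the larger supply:
real_value = savings * money_supply / (money_supply + newly_printed)
print(real_value)  # 50.0
```

So yes, roughly $50 in that toy model; the "more complex than that" part is everything the model leaves out.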

[+]
[-]

Universities spend millions on accessing results of publicly funded research

Are there good open source solutions for journal publishing? (HTML abstract, PDFs, comments, ...)?

[+]
[+]

> Ambra is being discontinued!

The article mentions the discontinuation of Aperta but nothing about Ambra?

https://plos.github.io/ambraproject/Developer-Overview.html

https://github.com/PLOS/ambra

[+]
[-]

An Interactive Introduction to Quantum Computing

Part 2 mentions two quantum algorithms that could be used to break Bitcoin (and SSH and SSL/TLS; and most modern cryptographic security systems): Shor's algorithm for factorization and Grover's search algorithm.

Part 2: http://davidbkemp.github.io/QuantumComputingArticle/part2.ht...

Shor's algorithm: https://en.wikipedia.org/wiki/Shor%27s_algorithm

Grover's algorithm: https://en.wikipedia.org/wiki/Grover%27s_algorithm

I don't know what heading I'd suggest for something about how concentration of quantum capabilities will create dangerous asymmetry. (That is why we need post-quantum ("quantum resistant") hash, signature, and encryption algorithms in the near future.)

Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256)

"Quantum attacks on Bitcoin, and how to protect against them (ECDSA, SHA256)" https://www.arxiv-vanity.com/papers/1710.10377/

> […] On the other hand, the elliptic curve signature scheme used by Bitcoin is much more at risk, and could be completely broken by a quantum computer as early as 2027, by the most optimistic estimates.

From https://csrc.nist.gov/Projects/Post-Quantum-Cryptography :

> NIST has initiated a process to solicit, evaluate, and standardize one or more quantum-resistant public-key cryptographic algorithms. Nominations for post-quantum candidate algorithms may now be submitted, up until the final deadline of November 30, 2017.

[-]

Project Euler

[+]
[+]
[+]
[+]

I like https://rosalind.info bioinformatics problems because:

- There are problem explanations and an accompanying textbook.

- You can structure the solutions with unit tests that test for known good values.

- There's a graph of problems.

[-]

Who’s Afraid of Bitcoin? The Futures Traders Going Short

[+]

Shark futures traders here to save the mf'in day!

[-]

Statement on Cryptocurrencies and Initial Coin Offerings

[+]
[+]

> I think people are viewing this as an attack on crypto, when its actually just common sense.

> […] The problem is these companies essentially reserve the right to disregard that contract and could then sell their company, domestically or overseas, for cash, without recompensating any token holders.

> Securities regulation and law stops that. But the tokens do need to be lawful securities in order for the court to recognize them.

This. IRS regards coins and tokens as capital gains taxable things regardless of whether they qualify as securities. SEC exists to protect investors from scams and unfair dealing. In order to protect investors, SEC regulates issuance of securities.

[-]

Ask HN: How do you stay focused while programming/working?

I often find myself "needing" to take a mini-break after just a few minutes of concerted effort while coding. In particular, this often occurs after I've made a tiny breakthrough, prompting me to reward myself by checking Twitter or HN. This bad habit quickly derails any momentum. What are some tips to increase focus stamina and avoid distraction?

[+]

> It's not exactly new and exciting, but I found that listening to calm, instrumental music helps me focus. Mostly Ambient.

Same. Lounge, Ambient, Chillout, Chillstep (https://di.fm has a bunch of great streams. SoundCloud and MixCloud have complete replayable sets, too.)

I've heard that videogame soundtracks are designed not to be distracting, in order to help focus.

[-]

A Hacker Writes a Children's Book

The rhymes and illustrations look great! Is there a board book edition?

Other great STEM and computers books for kids:

"A is for Array"

"Lift-the-Flap Computers and Coding"

"Computational Fairy Tales"

"Hello Ruby: Adventures in Coding"

"Python for Kids: A Playful Introduction To Programming"

"Lauren Ipsum: A Story About Computer Science and Other Improbable Things"

"Rosie Revere, Engineer"

"Ada Byron Lovelace and the Thinking Machine"

"HTML for Babies: Volume 1 of Web Design for Babies"

"What Do You Do With a Problem?"

"What Do You Do With an Idea?"

"ABCs of Mathematics", "The Pythagorean Theorem for Babies", "Non-Euclidean Geometry for Babies", "Introductory Calculus for Infants", "ABCs of Physics", "Statistical Physics for Babies", "Newtonian Physics for Babies", "Optical Physics for Babies", "General Relativity for Babies", "Quantum Physics for Babies", "Quantum Information for Babies", "Quantum Entanglement for Babies"

"ELI5": "Explain like I'm five"

Someone should really make a list of these.

Ask HN: Do ISPs have a legal obligation to not sell minors' web history anymore?

[+]

So they can currently argue that, since they don't know the age of the browser, they're not liable?

Weren't we better off with a policy making it illegal to sell web browsing history for anyone; regardless of whether their age or disability is known?

[-]

Tech luminaries call net neutrality vote an 'imminent threat'

> “The current technically-incorrect order discards decades of careful work by FCC chairs from both parties, who understood the threats that Internet access providers could pose to open markets on the Internet.”

Paid prioritization is that threat.

Again, streaming video content for all ages is not more important than online courses.

[-]

Ask HN: Can hashes be replaced with optimization problems in blockchain?

CureCoin.

From https://curecoin.net/knowledge-base/about-curecoin/what-is-c... :

> Curecoin allows owners of both ASIC and GPU/CPU hardware to earn. Curecoin puts ASICs to work at what they are good at–securing a blockchain, while it puts GPUs and CPUs to work with work items that can only be done on them–protein folding. While still having a secure blockchain, it supports, and thus is supported by, scientific research.

...

From "CureCoin Reaches #1 Ranking on Folding@home" https://www.newswire.com/news/bio-research-loves-curecoin-ga... :

> As of the afternoon of August 29, 2017 (Eastern Time), the Curecoin Team 224497 earned the world's #1 rank on Stanford's Folding@home - a protein folding simulation Distributed Computing Network (DCN). In a little over 3 years, the team (including our merge-folding partners at Foldingcoin) collectively produced 160 billion points worth of molecular computations to support research in the areas of cancer, Alzheimer's, Huntington's, Parkinson's, Infectious Disease as well as helping scientists uncover new molecular dynamics through groundbreaking computational techniques.

From https://news.ycombinator.com/item?id=15843795 :

> Gridcoin (Berkeley 2013) is built on Proof-of-Stake and Proof-of-Research. Gridcoin is used as payment for computing resources contributed to BOINC.

> I doubt that volatility would be welcome on the Gridcoin blockchain: Wikipedia lists "6.5% Inflation. 1.5% Interest + 5% Research Payments APR" under the Supply Growth infobox attribute.

> https://en.wikipedia.org/wiki/Gridcoin

[-]

Ask HN: What could we do with all the mining power of Bitcoin? Fold Protein?

Instead of buzzing SHA-512 in circles like busy bees ad infinitum, is there any way we can use these calculations productively?

Instead of algo-trading the stock markets?!

There are a number of distributed computing projects (e.g. SETI@home): https://en.wikipedia.org/wiki/List_of_distributed_computing_...

The Ethereum White Paper lists a number of applications for blockchains: https://github.com/ethereum/wiki/wiki/White-Paper

(Bitcoin is built on SHA-256; Ethereum is built on Keccak-256 (~SHA-3).)
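The SHA-256 side of this is easy to poke at from Python's standard library (note that hashlib has no Keccak-256; its `sha3_256` is the finalized NIST SHA-3, which pads differently than the Keccak-256 Ethereum uses):

```python
import hashlib

# Bitcoin applies SHA-256 twice ("double SHA-256") to block headers
payload = b"hello"
once = hashlib.sha256(payload).digest()
twice = hashlib.sha256(once).hexdigest()

# Single SHA-256 of b"hello", a well-known test vector
assert hashlib.sha256(b"hello").hexdigest() == (
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824")
assert len(twice) == 64  # 256 bits as hex
```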

Proof-of-Stake is a lower energy alternative to Proof-of-Work with tradeoffs: https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ

Unfortunately, IDK of another way to find secure consensus (blockchains are consensus protocols) in a DDOS-resistant way with unsolved problems?

> Unfortunately, IDK of another way to find secure consensus (blockchains are consensus protocols) in a DDOS-resistant way with unsolved problems?

Gridcoin (Berkeley 2013) is built on Proof-of-Stake and Proof-of-Research. Gridcoin is used as payment for computing resources contributed to BOINC.

I doubt that volatility would be welcome on the Gridcoin blockchain: Wikipedia lists "Supply growth 6.5% Inflation. 1.5% Interest + 5% Research Payments APR" under the Supply Growth infobox attribute.

https://en.wikipedia.org/wiki/Gridcoin

[-]

No CEO needed: These blockchain platforms will let ‘the crowd’ run startups

Mentioned in the article are Aragon, District0x, Ethlance, NameBazaar, Colony, DAOstack; all of which, IIUC, are built with Ethereum and Smart Contracts (DAOs).

[-]

How much energy does Bitcoin mining really use?

Is there a confidence interval chart with low, average, and high estimates? Maybe a Jupyter notebook with parametrized functions and a reproducible and reasonably reviewable analysis?

A sustainability index with voluntary data from mining pools would be great.

[-]

The Actual FCC Net Neutrality Repeal Document. TLDR: Read Pages 82-87 [pdf]

Here are some links to the relevant antitrust laws:

Sherman Antitrust Act (1890) https://en.wikipedia.org/wiki/Sherman_Antitrust_Act

Aspen Skiing Co. v. Aspen Highlands Skiing Corp. (1985) https://en.wikipedia.org/wiki/Aspen_Skiing_Co._v._Aspen_High....

Transparency in network management and paid prioritization practices and agreements will be relevant.

"We find that antitrust law, in combination with the transparency rule we adopt, is particularly well-suited to addressing any potential or actual anticompetitive harms that may arise from paid prioritization arrangements." (p.147)

If antitrust law is sufficient, as you've found, there would be no need for Title II Common Carrier regulation in any industry.

We can call phone numbers provided by any company at the same rate because phone companies are regulated as Title II Common Carriers. ISPs are also common carriers.

"Public airlines, railroads, bus lines, taxicab companies, phone companies, internet service providers,[3] cruise ships, motor carriers (i.e., canal operating companies, trucking companies), and other freight companies generally operate as common carriers."

https://en.wikipedia.org/wiki/Common_carrier

[-]

The 5 most ridiculous things the FCC says in its new net neutrality propaganda

> The Federal Communications Commission put out a final proposal last week to end net neutrality. The proposal opens the door for internet service providers to create fast and slow lanes, to block websites, and to prioritize their own content. This isn’t speculation. It’s all there in the text.

Great. Payola. Thanks Verizon!

Does the FTC have the agreement information needed to hear the anti-trust cases that are sure to result from what are now complaints to the FCC (an organization with network management expertise) being redirected to the FTC?

Title II is the appropriate policy set for ISPs; regardless of how lucrative horizontal integration with content producers seems.

[-]

FCC's Pai, addressing net neutrality rules, calls Twitter biased

No. Censoring hate speech by banning people who are verbally assaulting others (in violation of Terms of Service that they agreed to) is a very different concern than requiring common carriers to equally prioritize bits.

If we extend "you must allow people to verbally assault others (because free speech applies to the government)" to TV and radio, what do we end up with?

Note that the FCC fines non-cable TV (broadcast radio and TV) for cursing on air. See "Obscene, Indecent and Profane Broadcasts" https://www.fcc.gov/consumers/guides/obscene-indecent-and-pr...

How can you ask social media companies to do something about fake news (the vast majority of which served to elect the current administration (which nominated this FCC chairman)) while also lambasting them for upholding their commitment to providing a hate-free experience for net citizens and paying advertisers?

"Open Internet": No blocking. No throttling. No paid prioritization.

It would be easier for us to understand the "Open Internet" rules if the proposed "Restoring Internet Freedom" page wasn't crudely pasted over (redirected to from) the page describing the current Open Internet rules. www.fcc.gov/general/open-internet (current policy) now redirects to www.fcc.gov/restoring-internet-freedom (proposed policy).

ISPs blocking, throttling, or paid-prioritizing Twitter, Netflix, Fox, or CNN for everyone is a different concern than responding to individuals who are threatening others with hate speech.

The current policy ("Open Internet") means that you can use the bandwidth cap that you pay for for whatever legal content you please.

The proposed policy ("Restoring Internet Freedom") means that internet businesses will need to pay every ISP in order to not be slower than the big guys who can afford to pay-to-play (~"payola"). https://en.wikipedia.org/wiki/Payola

[-]

A curated list of Chaos Engineering resources

[+]

"Resilience Engineering" would be a good alternative term for these failure scenario simulations and analyses.

Glossary of Systems Theory > A > Adaptive capacity:

> Adaptive capacity: An important part of the resilience of systems in the face of a perturbation, helping to minimise loss of function in individual human, and collective social and biological systems

https://en.wikipedia.org/wiki/Glossary_of_systems_theory

[-]

Technology behind Bitcoin could aid science, report says

Bloom is working on non-academic credit building and scoring.

Hyperledger brings together many great projects and tools which have numerous applications in science and industry.

Is a blockchain necessary? Could we instead just sign JSONLD records with ld-signatures and store them in an eventually or strongly consistent database we all contribute resources to synchronizing and securing?
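As a sketch of the sign-and-verify idea (not the actual ld-signatures suite, which canonicalizes the RDF graph and uses public-key signatures), an HMAC over a deterministically serialized record:

```python
import hashlib
import hmac
import json

def sign_record(record: dict, key: bytes) -> str:
    # Deterministic serialization stands in for RDF canonicalization here
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hmac.new(key, canonical.encode(), hashlib.sha256).hexdigest()

key = b"shared-secret"  # illustrative; real ld-signatures use key pairs
record = {"@context": "https://schema.org", "@type": "Dataset", "name": "obs-001"}
sig = sign_record(record, key)

# Verification recomputes over the same canonical form; key order doesn't matter
reordered = {"name": "obs-001", "@type": "Dataset", "@context": "https://schema.org"}
assert hmac.compare_digest(sig, sign_record(reordered, key))
```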

[+]
[+]
[+]
[-]

Git hash function transition plan

> Some hashes under consideration are SHA-256, SHA-512/256, SHA-256x16, K12, and BLAKE2bp-256.

[+]
[-]

Vintage Cray Supercomputer Rolls Up to Auction

The linked jacket looks pretty cool.

"Vintage Nylon Cray Super Computer Coat Medium, Cray Y-MP C90 Chippewa Falls"

[-]

Vanguard Founder Jack Bogle Says ‘Avoid Bitcoin Like the Plague’

Over the past 7 years, Bitcoin has outperformed every security and portfolio that Jack Bogle has recommended.

[+]

Bitcoin has been a bubble since $1 and $100 to these people.

[+]
[+]
[+]

I think they just grew more tulips to meet demand?

https://en.wikipedia.org/wiki/Tulip_mania

[-]

Nasdaq Plans to Introduce Bitcoin Futures

[+]

> One way Nasdaq seeks to differentiate itself seems to be in the amount of data it uses for pricing the digital currency contracts. VanEck Associates Corp., which recently withdrew plans for a bitcoin exchange-traded fund, will supply the data used to price the contracts, pulling figures from more than 50 sources, according to the person. That appears to exceed CME’s plan to use four sources, and Cboe’s one. Nasdaq’s contracts will be cleared by Options Clearing Corp., the person said.

BitMEX bitcoin futures are already online. IDK how many price sources they pull?

Aren't there a few other companies already selling Bitcoin futures?

[+]
[+]
[+]
[+]

> Or, large investment banking houses will step in and create naked shorting opportunities to inflate sell pressure creating 'death spirals' to drive prices down and scoop them up and extreme discounts. This happens in the traditional public markets everyday.

Is there a term for this?

[+]
[-]

Ask HN: Where do you think Bitcoin will be by 2020?

I have a friend who believes it will be $100,000 per BitCoin and his reasoning is 'supply and demand'.

There will be around 18M bitcoins in 2020. [1][2]

[1] https://en.bitcoin.it/wiki/Controlled_supply

[2] https://bashco.github.io/
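The ~18M figure follows from Bitcoin's halving schedule (50 BTC per block initially, halving every 210,000 blocks); a rough check:

```python
def supply_after(blocks: int, reward: float = 50.0, interval: int = 210_000) -> float:
    """Approximate total BTC issued after a given block height."""
    total = 0.0
    while blocks > 0:
        n = min(blocks, interval)  # blocks mined at the current reward
        total += n * reward
        blocks -= n
        reward /= 2  # the halving
    return total

# Block ~630,000 (the third halving) was reached in 2020
assert abs(supply_after(630_000) - 18_375_000) < 1
```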

This paper [3] suggests we'll need to migrate from ECDSA to quantum-secure signature schemes before 2027.

[3] "Quantum attacks on Bitcoin, and how to protect against them" https://arxiv.org/abs/1710.10377

Hopefully, Ethereum will have figured out a Proof of Stake [4] solution for distributed consensus which is as resistant to DDOS as Proof of Work; but with less energy consumption (thereby, unfortunately or fortunately, un-incentivizing clean energy as a primary business goal).

[4] https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQ

Ask HN: Why would anyone share trading algorithms and compare by performance?

I was speaking with a person years my senior a while back, sharing information about the Quantopian platform (which allows users to backtest and share trading algorithms), and he asked me "why would anyone share their trading algorithms [if they're making any money]?"

I tried "to help each other improve their performance". Is there a better way to explain, to someone who spends their time reading forums with no objective performance comparisons over historical data, why people would help each other improve their trading algorithms?

Catalyst, like Quantopian, is also built on top of Zipline; but for cryptocurrencies. https://enigmampc.github.io/catalyst/example-algos.html

Zipline (backtesting and live trading of algorithms with initialize(context) and handle_data(context, data) functions; with the SPY S&P 500 ETF as a benchmark) https://github.com/quantopian/zipline
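The callback pattern Zipline uses (initialize once, then handle_data per bar) can be sketched in plain Python; this is the shape of the API, not the real Zipline classes, and the strategy and prices are toy stand-ins:

```python
def initialize(context):
    context.symbol = "SPY"
    context.cash = 10_000.0
    context.shares = 0

def handle_data(context, data):
    price = data[context.symbol]
    if context.shares == 0:  # toy strategy: buy and hold
        context.shares = int(context.cash // price)
        context.cash -= context.shares * price

class Context:  # stands in for the algorithm state Zipline threads through
    pass

context = Context()
initialize(context)
for bar in [{"SPY": 250.0}, {"SPY": 255.0}]:  # stand-in for a price feed
    handle_data(context, bar)

assert context.shares == 40 and context.cash == 0.0
```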

Pyfolio (for objectively comparing the performance of trading strategies over time) https://github.com/quantopian/pyfolio

...

"Community Algorithms Migrated to Quantopian 2" https://www.quantopian.com/posts/community-algorithms-migrat...

- "Reply to minimum variance w/ contrast" seems to far outperform the S&P 500.

[-]

Ask HN: CS papers for software architecture and design?

Can you please point me to some papers that you consider very influential for your work, or that you believe played a significant role in how we structure our software nowadays?

"The Architecture of Open Source Applications" Volumes I & II http://aosabook.org/en/

"Manifesto for Agile Software Development" https://en.wikipedia.org/wiki/Agile_software_development#The...

"Catalog of Patterns of Enterprise Application Architecture" https://martinfowler.com/eaaCatalog/

Fowler > Publications ("Refactoring",) https://en.wikipedia.org/wiki/Martin_Fowler#Publications

"Design Patterns: Elements of Reusable Object-Oriented Software" (GoF book) https://en.wikipedia.org/wiki/Design_Patterns

UNIX Philosophy https://en.wikipedia.org/wiki/Unix_philosophy

Plan 9 https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

## Distributed Systems

CORBA > Problems and Criticism (monolithic standards, oversimplification,): https://en.wikipedia.org/wiki/Common_Object_Request_Broker_A...

Bulk Synchronous Parallel: https://en.wikipedia.org/wiki/Bulk_synchronous_parallel

Paxos: https://en.wikipedia.org/wiki/Paxos_(computer_science)

Raft: https://en.wikipedia.org/wiki/Raft_(computer_science)#Safety

CAP theorem: https://en.wikipedia.org/wiki/CAP_theorem

[-]

Keeping a Lab Notebook [pdf]

[+]

These are ASCII-sortable:

0001_Introduction.ipynb

0010_Chapter-1.ipynb

ISO 8601 w/ UTC is also ASCII-sortable.
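Both conventions rely on lexicographic order matching intended order, and zero-padding is what makes that hold:

```python
from datetime import datetime, timezone

names = ["0010_Chapter-1.ipynb", "0001_Introduction.ipynb"]
assert sorted(names) == ["0001_Introduction.ipynb", "0010_Chapter-1.ipynb"]

# Without padding, string order diverges from numeric order ("1" < "2")
assert sorted(["10_b", "2_a"]) == ["10_b", "2_a"]

# ISO 8601 UTC timestamps sort the same way lexicographically and chronologically
t1 = datetime(2017, 1, 2, tzinfo=timezone.utc).isoformat()
t2 = datetime(2017, 1, 10, tzinfo=timezone.utc).isoformat()
assert sorted([t2, t1]) == [t1, t2]
```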

# Jupyter notebooks as lab notebooks

## Disadvantages

### Mutability

With a lab notebook, you can cross things out but they're still there.

- [ ] ENH: Copy cell and mark as don't execute (or wrap with ```language\n``` and change the cell type to markdown)

- [ ] ENH: add a 'Save and {git,} Commit' shortcut

CoCalc (was: SageMathCloud) has (somewhat?) complete notebook replay with a time slider; and multi-user collaborative editing. ("Time-travel is a detailed history of all your edits and everything is backed up in consistent snapshots.")

### Timestamps

You must add timestamps by hand; i.e. as #comments or markdown cells.

- [ ] ENH: add a markdown cell with a timestamp (from a configurable template) (with a keyboard shortcut)

### Project files

You must manage the non-.ipynb sources separately. (You can create a new file or folder, or just drag and drop to upload. You can open a shell tab to run `git status`, `git diff`, `git commit`, and `git push`, if the Jupyter/JupyterHub/CoCalc instance has network access to e.g. GitLab or GitHub.)

## Advantages

### Reproducibility

Executable I/O cells

The version_information and/or watermark extensions will inline the software versions that were installed when the notebook was last run.

Dockerfile for OS config

Conda environment.yml (and/or pip requirements.txt and/or pipenv Pipfile) for further software dependencies

BinderHub can rebuild a Docker image on receipt of a webhook from a git repo, push the built image to a Docker image registry, and then host prepared Jupyter instances (with Kubernetes) which contain (and reproducibly archive) all of the preinstalled prerequisites.

Diff: `git diff`, `nbdime`

### Publishing

You can generate static HTML, HTML slides with RevealJS, interactive HTML slides with RISE, executable source with comments (e.g. a .py file), LaTeX, and PDF with 'Save as' or `jupyter nbconvert --to`. You can also create slides with nbpresent.

MyBinder.org and Azure Notebooks have badges for e.g. a README.md or README.rst which launch a project executably in a docker instance hosted in a cloud. CoCalc and Anaconda Cloud also provide hosted Jupyter Notebook projects.

You can template a gradable notebook with nbgrader.

GitHub and nbviewer both render .ipynb notebooks as HTML.

There are more than 90 Jupyter Kernels for languages other than Python.

https://github.com/quobit/awesome-python-in-education#jupyte...

[-]

How to teach technical concepts with cartoons

There's not a Wikipedia page for "visual metaphor", but there are pages for "visual rhetoric" https://en.wikipedia.org/wiki/Visual_rhetoric and "visual thinking" https://en.wikipedia.org/wiki/Visual_thinking

Negative space can be both meaningful and useful later on.

I learned about visual thinking and visual metaphor in application to business communications from "The Back of the Napkin: Solving Problems and Selling Ideas with Pictures" http://www.danroam.com/the-back-of-the-napkin/

[-]

Fact Checks

Indeed, fact checking systems are only as good as the link between identity credentialing services and a person.

http://schema.org/ClaimReview (as mentioned in this article) is a good start.

A few other approaches to be aware of:

"Reality Check is a crowd-sourced on-chain smart contract oracle system" [built on the Ethereum smart contracts and blockchain]. https://realitykeys.github.io/realitycheck/docs/html/

And standards-based approaches are not far behind:

W3C Credentials Community Group https://w3c-ccg.github.io/

W3C Verifiable Claims Working Group https://www.w3.org/2017/vc/WG/

W3C Verifiable News https://github.com/w3c-ccg/verifiable-news

In terms of verifying (or validating) subjective opinions, correlational observations, and inferences of causal relations, #LinkedMetaAnalyses of documents (notebooks) containing structured links to their data as premises would be ideal. Unfortunately, PDF is not very helpful in accomplishing that objective (in addition to being a terrible format for review with screen readers and mobile devices). I think HTML with RDFa (and/or CSVW JSONLD) is our best hope of making at least partially automated verification of meta-analyses a reality.

[-]

DHS orders agencies to adopt DMARC email security

From https://www.cyberscoop.com/dhs-dmarc-mandate/ :

> By Jan. 2018, all federal agencies will be required to implement DMARC across all government email domains.

> Additionally, by Feb. 2018, those same agencies will have to employ Hypertext Transfer Protocol Secure (HTTPS) for all .gov websites, which ensures enhanced website certifications.

Requiring TLS (and showing an unlocked icon for non-TLS-secured emails) would also be good.
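DMARC itself is deployed as a DNS TXT record at `_dmarc.<domain>`; a minimal example (the domain and reporting address are illustrative):

```
_dmarc.example.gov.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov"
```

`p=` sets the policy (none, quarantine, or reject) and `rua=` is where aggregate reports are sent; deployments typically start at `p=none` and ratchet up.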

[-]

The electricity for 1BTC trade could power a house for a month

The article seems to imply that a 1BTC transaction requires 200kWh of energy.

First, what is the source for that number?

Second, what is the business interest of the quoted individual? Are they promoting competing services?

Third, how much energy does the supposed alternative really take, by comparison?
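Taking the article's implied 200 kWh per transaction at face value, the per-day arithmetic behind the headline is easy to check (whether ~7 kWh/day actually powers a house depends heavily on the country):

```python
kwh_per_tx = 200        # the article's implied figure (source unstated)
days_per_month = 30
per_day = kwh_per_tx / days_per_month
assert round(per_day, 1) == 6.7  # ~6.7 kWh/day is the claimed household load
```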

How much energy do these aspects of said business operations require:

- Travel to and from the office for n employees

- Dry cleaning for n employees' work clothes

- Lights for an office of how many square feet

- Fraud investigations: hours worked, postal costs, wait times, and the CPU time and bandwidth spent trying to reconcile data silos' ledgers' transaction IDs and time skew, with a full-table JOIN on data that no single party holds all of at once

- Desktop machines' idle hours

- Server machines' idle hours

With low cost clean energy, these businesses are profitable; with a very different cost structure than traditional banking and trading.

Anyone want to guess how much the quoted concerned party has invested in cryptocoins / cryptocurrencies? Guy's prolly just sitting at home, shorting it, just waiting for the price to move.

By comparison, with an ICO, there's less back-and-forth on the cap table.

"My job is to feed the machines."

PAC Fundraising with Ethereum Contracts?

I'll cc this here with formatting changes (extra \n and ---) for Hacker News:

---

### Background

- PAC: Political Action Committee https://en.wikipedia.org/wiki/Political_action_committee

- https://github.com/holographicio/awesome-token-sale

### Questions

- Is Civic PAC fundraising similar to e.g. a Crowdsale or a CappedCrowdsale or something else entirely, in terms of ERC20 OpenZeppelin solidity contracts?

- Would it be worth maintaining an additional contract for [PAC] "fundraising" with terminology that campaigns can understand; or a terminology map?

- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the risks of a token sale for a PAC?

--- Is there any way to check for donors' citizenship? (When/Where is it necessary to check donors' citizenship (with credit/debit cards or cryptocoins/cryptotokens?))

- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the costs of a token sale for a PAC?

--- How much gas would such a contract require?

- Compared to just accepting donations at a wallet address, or just accepting credit/debit card donations, what are the benefits of a token sale for a PAC?

--- Lower transaction fees than credit/debit cards?

--- Time limit (practicality, marketing)

--- Cap ("we only need this much")

--- Refunds in the event of […]

### Objectives

- Comply with all local campaign finance laws

--- Collect citizenship information for a Person

--- Collect citizenship information for an Organization 'person'

- Ensure that donations hold value

- Raise funds

- Raise funds up to a cap

- (Optionally?) collect names and contact information ( https://schema.org/Person https://schema.org/Organization )

- Optionally refund if the cap is not met

- Optionally change the cap midstream

- Optionally cancel for a specified string and/or URL reason

[-]

Here’s what you can do to protect yourself from the KRACK WiFi vulnerability

> But first, let’s clarify what an attacker can and cannot do using the KRACK vulnerability. The attacker can intercept some of the traffic between your device and your router. Attackers can’t obtain your Wi-Fi password using this vulnerability. They can just look at your traffic. It’s like sharing the same WiFi network in a coffee shop or airport.

From reading the articles:

https://www.krackattacks.com/

( https://github.com/vanhoefm/krackattacks ; which is watch-able )

> Against these encryption protocols, nonce reuse enables an adversary to not only decrypt, but also to forge and inject packets.

https://www.kb.cert.org/vuls/id/228519

> Key reuse facilitates arbitrary packet decryption and injection, TCP connection hijacking, HTTP content injection, or the replay of unicast, broadcast, and multicast frames.

[-]

Using the Web Audio API to Make a Modem

While we're talking about Air Gaps, it's probably worth mentioning GSMem (an {x86,} internal bus as a GSM cellular transceiver (modem)); from Wikipedia:

https://en.wikipedia.org/wiki/Air_gap_malware

[-]

Ask HN: How to introduce someone to programming concepts during 12-hour drive?

I won't go into details to keep this brief, but I'm going to spend a week with this client of mine's kid, and I'm supposed to teach him enough about programming for him to figure out if it's something he might be interested in pursuing.

He's about 20, and still struggling to finish high school, but he's smart (although perhaps a little weird).

I thought about introducing him to touch typing just to get a useful skill out of this regardless of the outcome. Then, I thought that during this week I'd teach him HTML and enough CSS to see what it's used for. I'm thinking that if he gets excited about typing code and seeing things happening he'll want to study more and learn more advanced stuff in the future and perhaps even make it his profession (this is what my client hopes will happen).

Now, part of this trip is a 12-hour drive. I thought I could use this time to introduce him to simple programming concepts. For instance, if asked to list all steps involved in starting a car, most people would say:

- turn key

- start car

That could turn into an infinite loop, though. A better way would be:

- turn key

- start car

- if it starts, exit

- if it doesn't start, repeat 3 more times

- if it still won't start, call a mechanic

Stuff like this: things anyone can understand, that can be explained without looking at a computer, but that are still useful.

Any idea what I could talk about? Examples, anecdotes, anything.

Computational Thinking:

https://en.wikipedia.org/wiki/Computational_thinking

> 1. Problem formulation (abstraction);

> 2. Solution expression (automation);

> 3. Solution execution and evaluation (analyses).

This is a good skills matrix to start with:

http://sijinjoseph.com/programmer-competency-matrix/

https://competency-checklist.appspot.com

"Think Python: How to Think Like a Computer Scientist"

http://www.greenteapress.com/thinkpython/html/index.html

K12CS Framework is good for all ages:

https://k12cs.org

For syntax, Learn X in Y Minutes:

https://learnxinyminutes.com/docs/python3/

https://learnxinyminutes.com/docs/javascript/
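The car-starting steps from the question also map directly onto a bounded retry loop; a Python sketch (the helper simulating key turns is hypothetical):

```python
def try_start_car(attempts: list) -> bool:
    """Simulates turning the key: True means the engine caught."""
    return attempts.pop(0)

def start_car(attempts, max_tries=4):
    for _ in range(max_tries):      # turn the key up to 4 times total
        if try_start_car(attempts):
            return "driving"        # it started: exit the loop
    return "call a mechanic"        # still won't start after all tries

assert start_car([False, False, True, False]) == "driving"
assert start_car([False, False, False, False]) == "call a mechanic"
```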

[+]

To get a job, "Coding Interview University":

https://github.com/jwasham/coding-interview-university

[+]
[+]
[+]

You can learn about a person's internal representation by asking Clean Questions and listening to the metaphors that they share; in order to avoid transferring and inferring your own biased internal representation (MAPS: metaphors, assumptions, paradigms or sensations).

It's worth reading this whole article (and e.g. "Clean Language: Revealing Metaphors and Opening Minds")

https://en.wikipedia.org/wiki/Clean_Language

"Metaphors We Live By" explains conceptual metaphor ("internal representation" w/ Clean Language / Symbolic Modeling) and lists quite a few examples: https://en.wikipedia.org/wiki/Conceptual_metaphor

Our human brains tend to infer Given, When, Then "rules" which we only later reason about in terms of causal relations: https://en.wikipedia.org/wiki/Given-When-Then

It's generally accepted that software is more correct when we start with tests:

Given : When : Then :: Precondition : Command : Postcondition https://wrdrd.github.io/docs/consulting/software-development...

... "Criteria for Success and Test-Driven-Development" https://westurner.github.io/2016/10/18/criteria-for-success-...
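In Python, the Given/When/Then :: precondition/command/postcondition mapping reads naturally as a test (the deposit function here is a hypothetical example, not from either link):

```python
def deposit(balance: float, amount: float) -> float:
    if amount <= 0:
        raise ValueError("amount must be positive")
    return balance + amount

def test_deposit_increases_balance():
    balance = 100.0                    # Given: a starting balance (precondition)
    balance = deposit(balance, 25.0)   # When: we run the command
    assert balance == 125.0            # Then: the postcondition holds

test_deposit_increases_balance()
```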

I believe it was Feynman who introduced the analogy:

desktop : filing cabinet :: RAM : hard drive

Here's a video: "Richard Feynman Computer Heuristics Lecture" (1985) https://youtu.be/EKWGGDXe5MA

Somewhere in my comments here, I talk about topologically sorting CS concepts; in what little time I spent, I think I suggested "Constructor Theory" (Deutsch 201?) as a first physical principle. https://en.wikipedia.org/wiki/Constructor_theory

> Constructor Theory

https://en.wikipedia.org/wiki/Constructor_theory#Outline

Task, Constructor, Computation Set, Computation Medium, Information Medium, Superinformation Medium (quantum states)

The filing cabinet and disk storage are information mediums / media.

How is the desktop / filing cabinet metaphor mismatched or limiting?

There may be multiple desktops (RAM/Cache/CPU; Computation mediums): is the problem parallelizable?

Consider a resource scheduling problem: there are multiple rooms, multiple projectors, and multiple speakers. Rooms and projectors cost so much. Presenters could use all of an allotted period of time; or they could take more or less time. Some presentations are logically sequenceable (SHOULD/MUST be topologically sorted). Some presentations have a limited amount of time for questions afterward.

Solution: put talks online with an infinite or limited amount of time for asynchronous questions/comments

Solution: in between attending a presentation, also research and share information online (concurrent / asynchronous)
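The "SHOULD/MUST be topologically sorted" constraint on the sequenceable presentations is exactly what a topological sort produces; Python's graphlib (3.9+) does it directly (the talk names are hypothetical):

```python
from graphlib import TopologicalSorter

# Each talk maps to the set of talks that must come before it
prerequisites = {
    "Intro to Graphs": set(),
    "Topological Sort": {"Intro to Graphs"},
    "Scheduling with DAGs": {"Topological Sort"},
}
order = list(TopologicalSorter(prerequisites).static_order())

# Every prerequisite appears before the talk that depends on it
assert order.index("Intro to Graphs") < order.index("Topological Sort")
assert order.index("Topological Sort") < order.index("Scheduling with DAGs")
```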

And, like a hash map, keep the lookup time for a given typed resource ~O(1) with URLs (URIs) that don't change. (Big-O notation for computational complexity.)

Resource scheduling (SLURM,): https://news.ycombinator.com/item?id=15267146

[-]

American Red Cross Asks for Ham Radio Operators for Puerto Rico Relief Effort

kw71 | 2017-09-27 01:24:13 | 346 | # | ^

Zello trended up during hurricane Harvey:

http://zello.com/

> Push the button for instant, radio-style talk on any Wi-Fi or data plan.

> Access public and private channels.

> Choose button for push-to-talk.

> [...] available for Android, BlackBerry, iPhone, Windows PC and Windows Phone 8

...

> Connects to existing LMR radio systems

> All Radio Technologies

> Interconnect conventional and trunked analog FM, ETSI DMR, ETSI TETRA, MotoTRBO, APCO P25 FDMA, and NXDN.

> https://zellowork.com/lmr

They probably need some batteries, turbines, and solar cell chargers to get WiFi online?

> This phone needs no battery

http://www.techradar.com/news/this-phone-needs-no-battery

> [...] “We’ve built what we believe is the first functioning cellphone that consumes almost zero power,” said Shyam Gollakota, an associate professor in the Paul G. Allen School of Computer Science & Engineering at the UW and co-author on a paper describing the technology.

> Instead, the phone pulls power from its environment - either from ambient radio signals harvested by an antenna, or ambient light collected by a solar cell the size of a grain of rice. The device consumes just 3.5 microwatts of power during use.

> [...] “And if every house has a Wi-Fi router in it, you could get battery-free cellphone coverage everywhere."

(Also trending on HackerNews right now: https://news.ycombinator.com/item?id=15350799 )

[+]

A low energy phone (and WiFi (from a related UW R&D team?)) would be extremely useful in this and future disaster relief scenarios. Furthermore, radio operators who care about the Red Cross may be able to help pull this product through to market.

Probably also worth mentioning Shelterpods and Responsepods for disaster relief deployments to this crowd; they're designed to take a lot of wind and rain:

https://store.advancedsheltersystemsinc.com/?___store=shelte...

https://store.advancedsheltersystemsinc.com/responsepod/vip/...

There's also the Nearby Connections API (for Android only at this point AFAIU) which'll use any radio chips on a device.

https://developers.google.com/nearby/connections/overview

A button on routers for emergency adhoc mode would be super useful?

[-]

Django 2.0 alpha

orf | 2017-09-23 12:12:36 | 156 | # | ^
[+]
[+]
[+]

Does it support negative long integers?

EDIT: I am without actual internet or mobile tethering and am unable to `git clone https://github.com/django/django -b stable/2.0.x` and check out this convenient new feature.

[+]
[-]

Ask HN: What is the best way to spend my time as a 17-year-old who can code?

I'm 17 and I can code at a relatively high level. I'm not really sure what I should be doing. I would like to make some money, but is it more useful to me to contribute to open-source software to add to my portfolio or to find people who will hire me? Even most internships require you to be enrolled as a CS major at a college. I've also tried things like Upwork, but generally people aren't willing to hire a 17-year-old and the pay is very bad. Thanks for any advice!

My GitHub is: https://github.com/meyer9

Pick a #GlobalGoal or three that you find interesting and want to help solve.

Apply Computational Thinking to solving a given problem. Break it down into completable tasks.

You can work on multiple canvases at once: sometimes it's helpful to let things simmer on the back burner while you're taking care of business. Just don't spread yourself too thin: not everyone deserves your time.

Remember ERG theory (and Maslow's Hierarchy). Health and food and shelter are obviously important.

Keep lists of good ideas. Notecards, git, a nice fresh blank sheet of paper for the #someday folder. What to call it isn't important yet. "Thing1" and "Thing2".

You can spend time developing a portfolio, building up your skills, and continuing education. You can also solve a problem now.

You don't need a co-founder at first. You do need to plan to be part of a team: other people are good at other things, and that's often the part they most enjoy doing.

[-]

Ask HN: Any detailed explanation of computer science

Any detailed, easily understandable explanation of computer science from the bottom up, like Feynman's lectures' explanation of physics?

Bits

Boolean algebra

Boolean logic gates / (set theory)

CPU / cache

Memory / storage

Data types (signed integers, floats, decimals, strings), encoding

...

A bottom-up (topologically sorted) computer science curriculum ontology (a depth-first traversal of a Thing graph) would be a great teaching resource.

One could start with e.g. "Outline of Computer Science", add concept dependency edges, and then topologically (and alphabetically or chronologically) sort.

https://en.wikipedia.org/wiki/Outline_of_computer_science

There are many potential starting points and traversals toward specialization for such a curriculum graph of schema:Things/skos:Concepts with URIs.

How to handle classical computation as a "collapsed" subset of quantum computation? Maybe Constructor Theory?

https://en.wikipedia.org/wiki/Constructor_theory

From "Resources to get better at theoretical CS?" https://news.ycombinator.com/item?id=15281776 :

- "Open Source Society University: Path to a self-taught education in Computer Science!" https://github.com/ossu/computer-science

This is also great:

- "Coding Interview University" https://github.com/jwasham/coding-interview-university

Neither these nor the ACM Curriculum are specifically topologically sorted.
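A minimal sketch of such a topological sort over a toy concept-dependency graph (the concept list is an illustrative subset, not a real curriculum), using Python's standard library:

```python
from graphlib import TopologicalSorter

# Each concept maps to its prerequisites (illustrative subset only).
prerequisites = {
    "boolean_algebra": {"bits"},
    "logic_gates": {"boolean_algebra"},
    "cpu": {"logic_gates"},
    "memory": {"logic_gates"},
    "data_types": {"bits"},
}

# static_order() yields every concept after all of its prerequisites.
order = list(TopologicalSorter(prerequisites).static_order())
print(order)
```

The same idea scales to "Outline of Computer Science" with concept-dependency edges as above; ties could then be broken alphabetically or chronologically.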

[-]

Ask HN: What algorithms should I research to code a conference scheduling app

I'm interested in writing a utility to assist with scheduling un-conferences. Let's take the following situation as an example:

* 4 conference rooms across 4 time slots, for a total of 16 talks.

* 30 proposed talks

* 60 total participants

Each user would be given 4(?) votes, un-ranked (collection of the votes is a separate topic). Voting is not secret, and we don't need mathematically precise results. The goal is just to minimize conflicts.

The algorithm would have the following data to work with:

* List of talks with the following properties:

     * presenter participant ID

     * the participant ID for each user that voted for the talk

I'd like to come up with an algorithm that does the following:

* fills all time slots with the highest voted topics

* attempts to avoid overlapping votes for any particular given user in a given time slot

* attempts to not schedule a presenter's talk during a talk they are interested in.

* Sugar on top: implement ranked preferences

My question: where do I start to research the algorithms that will be helpful? I know this is a huge project, but I have a year to work on it. I'm also not overly concerned with performance, but would like to keep it from being exponential.

Thank you for any references you can provide!
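As a starting point, a greedy heuristic already gets you most of the way: take the 16 highest-voted talks, then place each into the time slot where its voters overlap least with talks already scheduled there. Everything below is a sketch with made-up data; for stronger guarantees, the keywords to research are constraint satisfaction and integer programming (and the Hungarian algorithm for assignment problems).

```python
ROOMS, SLOTS = 4, 4

def schedule(talks):
    # Greedy sketch: talks maps talk_id -> set of voter participant IDs
    # (the presenter can be included as a voter for their own talk).
    # Highest-voted talks are placed first; each goes into the slot where
    # its voters overlap least with the talks already placed there.
    top = sorted(talks, key=lambda t: len(talks[t]), reverse=True)[:ROOMS * SLOTS]
    slots = [[] for _ in range(SLOTS)]  # each slot holds up to ROOMS talks
    for talk in top:
        best = min(
            (s for s in range(SLOTS) if len(slots[s]) < ROOMS),
            key=lambda s: sum(len(talks[talk] & talks[other]) for other in slots[s]),
        )
        slots[best].append(talk)
    return slots

# Toy data: 8 talks, each with 4 overlapping voters.
talks = {f"t{i}": set(range(i, i + 4)) for i in range(8)}
print(schedule(talks))
```

Ranked preferences (the "sugar on top") would turn the overlap count into a weighted sum.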

[-]

What have been the greatest intellectual achievements?

- The internet (TCP/IP) and world wide web (HTML, HTTP).

History of the Internet:

https://en.wikipedia.org/wiki/History_of_the_Internet

History of the World Wide Web:

https://en.wikipedia.org/wiki/History_of_the_World_Wide_Web

- Relational algebra, databases, Linked Data (RDF,).

Relational algebra:

https://en.wikipedia.org/wiki/Relational_algebra

Relational database:

https://en.wikipedia.org/wiki/Relational_database

Linked Data:

https://en.wikipedia.org/wiki/Linked_data

RDF:

https://en.wikipedia.org/wiki/Resource_Description_Framework

The UNDHR (UN Declaration of Human Rights): [Equality,]

http://www.un.org/en/universal-declaration-human-rights/

- Time, Calendars

Time > History of the calendar: https://en.wikipedia.org/wiki/Time#History_of_the_calendar

- Standard units of measure (QUDT URIs)

https://en.wikipedia.org/wiki/Units_of_measurement

[+]
[-]

Ask HN: What can't you do in Excel? (2017)

Was just Googling around for whether Excel (sans VBA scripting, of course) is Turing-complete, in order to decide whether to tell a layperson that Excel (or spreadsheeting in general) can be considered very much like programming. Came across this 2009 HN thread, "Ask HN: What can't you do in Excel?" from pg:

> One of the startups in the current YC cycle is making a new, more powerful spreadsheet. If there are any Excel power users here, could you please describe anything you'd like to be able to do that you can't currently? Your reward could be to have some very smart programmers working to solve your problem.

https://news.ycombinator.com/item?id=429477

What significant advances -- in Excel/spreadsheets, not the Turing-complete thing -- have been made in the 8 years since? What's the YC startup from that cycle that "is making a new, more powerful spreadsheet", and what is it doing today? I remember Grid [0], but that was from 2012. Any other companies make innovations that would overturn the spreadsheet paradigm, or at least be copied by Excel/OO/GSheets?

A commenter mentioned "Queries", since many spreadsheet users use spreadsheets like a database. I just recently noticed that GSheets has a QUERY function [1] that uses "principles of Structured Query Language (SQL)" to do searches. The function has been around since 2015 (according to Internet Archive [2]), so perhaps I ignored it because its description then was simply, "Runs a Google Visualization API Query Language query across data."

It appears that "Visualization API Query Language" has a lot of SQL-type features with the immediately obvious exception of joins [3].

edit: Multiple people said they would like Excel to have online functionality, i.e. like Google Sheets, but being able to accept VBA and any other features of legacy Excel spreadsheets. There's now Excel Online but I haven't used it (still sticking to Office 2011 for Mac if I ever need to use Excel instead of GS). How seamless is the transition from offline, legacy Excel files to online Excel?

[0] http://blog.ycombinator.com/grid-yc-s12-reinvents-the-spreadsheet-for-the/

[1] https://support.google.com/docs/answer/3093343?hl=en

[2] http://web.archive.org/web/20150319144449/https://support.google.com/docs/answer/3093343?hl=en

[3] https://developers.google.com/chart/interactive/docs/querylanguage
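As a point of comparison, prototyping the "spreadsheet rows as a database" idea with in-memory SQLite takes only a few lines (toy data; this is ordinary SQL, so unlike the Visualization API Query Language it also supports joins):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 100.0), ("east", 50.0), ("west", 75.0)])

# The kind of SELECT / GROUP BY that GSheets' QUERY() mimics:
for region, total in con.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"):
    print(region, total)
```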

[+]
[+]

W3C RDF Data Cubes (qb:)

https://wrdrd.github.io/docs/consulting/knowledge-engineerin...

> RDF Data Cubes vocabulary is an RDF standard vocabulary for expressing linked multi-dimensional statistical data and aggregations.

> Data Cubes have dimensions, attributes, and measures

> Pivot tables and crosstabulations can be expressed with RDF Data Cubes vocabulary

And then SDMX is widely used internationally:

https://github.com/pandas-dev/pandas/issues/3402#issuecommen...

Linked Data.

> [...] 7 metadata header rows (column label, property URI path, DataType, unit, accuracy, precision, significant figures)

https://wrdrd.github.io/docs/consulting/linkedreproducibilit...

Specifically, CSVW JSONLD as a lossless output format.

CSVW supports physical units.

https://twitter.com/westurner/status/901990866704900096

> "Model for Tabular Data and Metadata on the Web" (#JSONLD, #RDFa HTML) is for Data on the Web #dwbp #linkeddata https://www.w3.org/TR/tabular-data-model/

> #CSVW defaults to xsd:string if unspecified. "How do you support units of measure?" #qudt https://www.w3.org/TR/tabular-data-primer/#units-of-measure

[-]

Ask HN: How do you, as a developer, set measurable and actionable goals?

I see a lot of people from other industries, say designers or sales people, who can set for themselves actionable and measurable goals such as "Make one illustration a day", "Make a logo a day" or "Sell X units of Y product a day", "Make X amount of dollars selling product Z by date X", etc.

How do you, as a developer, set measurable goals for yourself, be it at work or in your side hobby?

[+]

Burn down chart (each story has complexity points, making it possible to estimate velocity and sprint deadlines):

https://en.wikipedia.org/wiki/Burn_down_chart

User stories in a "story map" (Kanban board) with labels and/or milestones for epics, flights, themes:

https://en.wikipedia.org/wiki/User_story#Story_map

Software Development > Requirements Management > Agile Modeling > User Story: https://wrdrd.github.io/docs/consulting/software-development...
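The estimate a burn-down chart supports is simple arithmetic; a sketch with toy numbers:

```python
import math

# Story points closed in past sprints (toy numbers).
completed_per_sprint = [8, 13, 10]
remaining_points = 42

# Velocity = average points completed per sprint; forecast = remaining / velocity.
velocity = sum(completed_per_sprint) / len(completed_per_sprint)
sprints_left = math.ceil(remaining_points / velocity)
print(velocity, sprints_left)
```

The measurable goal is then per-sprint ("complete N points"), not per-day.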

[-]

Bitcoin Energy Consumption Index

... Speaking of environmental externalities,

In the US, "Class C" fire extinguishers work on electrical fires:

From Fire_class#Electrical:

https://en.wikipedia.org/wiki/Fire_class#Electrical

> Carbon dioxide CO2, NOVEC 1230, FM-200 and dry chemical powder extinguishers such as PKP and even baking soda are especially suited to extinguishing this sort of fire. PKP should be a last resort solution to extinguishing the fire due to its corrosive tendencies. Once electricity is shut off to the equipment involved, it will generally become an ordinary combustible fire.

> In Europe, "electrical fires" are no longer recognized as a separate class of fire as electricity itself cannot burn. The items around the electrical sources may burn. By turning the electrical source off, the fire can be fought by one of the other class of fire extinguishers [citation needed].

How does this compare to carbon-intensive resource extraction operations like gold mining?

(Gold is industrially and medically useful, IIUC)

See also:

"So, clean energy incentives" https://news.ycombinator.com/item?id=15070430

[-]

Dancing can reverse the signs of aging in the brain

"Dancing or Fitness Sport? The Effects of Two Training Programs on Hippocampal Plasticity and Balance Abilities in Healthy Seniors"

Front. Hum. Neurosci., 15 June 2017 | https://doi.org/10.3389/fnhum.2017.00305

Adult neurogenesis:

https://en.wikipedia.org/wiki/Adult_neurogenesis

IIUC:

{Omega 3/6, Cardiovascular exercise,} -> Endocannabinoids -> [Hippocampal,] neurogenesis

"Neurobiological effects of physical exercise" (Hippocampal plasticity, neurogenesis,)

https://en.wikipedia.org/wiki/Neurobiological_effects_of_phy...

"Study: Omega-3 fatty acids fight inflammation via cannabinoids" https://news.illinois.edu/blog/view/6367/532158 (Omega 6: Omega 3 ratio)

scholar.google q=cannabinoid+neurogenesis https://scholar.google.com/scholar?q=cannabinoid+neurogenesi...

Functions of the ECS (Endocannabinoid System):

https://en.wikipedia.org/wiki/Endocannabinoid_system#Functio...

- #Role-in-hippocampal-neurogenesis, "runners high"

[-]

Rumours swell over new kind of gravitational-wave sighting

[+]
[+]
[-]

New Discovery Simplifies Quantum Physics

"amplituhedron"

> A team of physicists have released a paper showing their discovery of a jewel-like geometric structure that takes equations, which can be thousands of terms long, and simplifies them into a single term.

[-]

OpenAI has developed new baseline tool for improving deep reinforcement learning

https://blog.openai.com/openai-baselines-dqn/ (May 2017)

Deep Learning RL (Reinforcement Learning) algos in this batch of OpenAI RL baselines: DQN, Double Q Learning, Prioritized Replay, Dueling DQN

Src: https://github.com/openai/baselines

[+]

https://blog.openai.com/baselines-acktr-a2c/ (August 2017)

ACKTR & A2C (~=A3C)

(The GitHub readme lists: A2C, ACKTR, DDPG, DQN, PPO, TRPO)

... openai/baselines/commits/master: https://github.com/openai/baselines/commits/master

[-]

The prior can generally only be understood in the context of the likelihood

Bayes assumes/requires conditional independence of observations; which is sometimes the case.

For example:

- Are the positions of the Earth and the Moon conditionally independent? No.

- In the phrase "the dog and the cat", are "and" and "the" independent? No.

- In a biological system, are we to assume conditional independence? We should not.

https://en.wikipedia.org/wiki/Conditional_independence

...

"Efficient test for nonlinear dependence of two continuous variables" https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4539721/

- In no particular sequence: CANOVA, ANOVA, Pearson, Spearman, Kendall, MIC, Hoeffding
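Pearson's r in particular can report zero for a perfectly dependent but nonlinear pair, which is what motivates tests like those above. A pure-Python toy check:

```python
import math

def pearson(xs, ys):
    # Sample Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

xs = [-2, -1, 0, 1, 2]
ys = [x * x for x in xs]  # y is a deterministic function of x
print(pearson(xs, ys))    # 0.0: linear correlation misses the (perfect) nonlinear dependence
```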

From https://plato.stanford.edu/entries/logic-inductive/ :

> It is now generally held that the core idea of Bayesian logicism is fatally flawed—that syntactic logical structure cannot be the sole determiner of the degree to which premises inductively support conclusions. A crucial facet of the problem faced by Bayesian logicism involves how the logic is supposed to apply to scientific contexts where the conclusion sentence is some hypothesis or theory, and the premises are evidence claims. The difficulty is that in any probabilistic logic that satisfies the usual axioms for probabilities, the inductive support for a hypothesis must depend in part on its prior probability. This prior probability represents how plausible the hypothesis is supposed to be based on considerations other than the observational and experimental evidence (e.g., perhaps due to relevant plausibility arguments). A Bayesian logicist must tell us how to assign values to these pre-evidential prior probabilities of hypotheses, for each of the hypotheses or theories under consideration. Furthermore, this kind of Bayesian logicist must determine these prior probability values in a way that relies only on the syntactic logical structure of these hypotheses, perhaps based on some measure of their syntactic simplicities. There are severe technical problems with getting this idea to work. Moreover, various kinds of examples seem to show that such an approach must assign intuitively quite unreasonable prior probabilities to hypotheses in specific cases (see the footnote cited near the end of section 3.2 for details). Furthermore, for this idea to apply to the evidential support of real scientific theories, scientists would have to formalize theories in a way that makes their relevant syntactic structures apparent, and then evaluate theories solely on that syntactic basis (together with their syntactic relationships to evidence statements). Are we to evaluate alternative theories of gravitation (and alternative quantum theories) this way?

[+]

Bayesian logicism is the logic derived from Bayesian probability.

Magic numbers are an anti-pattern: which constants are what and why should be justified OR it should be shown that a non-expert-biased form converges regardless.

[+]

Arbitrary priors are magic numbers.

Is there a frequentist statistic that can be used in a deterministic function to determine which arbitrary priors to use?

What does Bayes say when we swap A and B?

Ask HN: How to find/compare trading algorithms with Quantopian?

I found this, which links to a number of quantitative trading algorithms that significantly outperform as compared with SPY (an S&P 500 ETF):

"Community Algorithms Migrated to Quantopian 2"

https://www.quantopian.com/posts/community-algorithms-migrat...

Why even build a business, create jobs, and solve the world's problems?

... "Impact investing"

https://en.wikipedia.org/wiki/Impact_investing

"Is this a good way to invest in solving for the #GlobalGoals for Sustainable Development ( https://GlobalGoals.org )?"

[-]

MS: Bitcoin mining uses as much electricity as 1M US homes

So, clean energy incentives.

> That means 1.2% of the Sahara desert is sufficient to cover all of the energy needs of the world in solar energy.

https://www.forbes.com/sites/quora/2016/09/22/we-could-power...

Nearly all other animals on the planet survive entirely on solar energy.

https://en.wikipedia.org/wiki/Solar_energy

[-]

Ask HN: What are your favorite entrepreneurship resources

Hey everyone, I'm teaching an undergraduate class in the fall at a local university here in Miami (FIU) and would love your recommendations on what books or articles or frameworks you think the students should read. My goal for the class is to teach them how to identify problems and prototype solutions for those problems. Hopefully, they make some money from them to help pay for books, etc.

I put these notes together:

Entrepreneurship: https://wrdrd.github.io/docs/consulting/entrepreneurship

- #plan-for-failure

- #plan-for-success

Investing > Capitalization Table: https://wrdrd.github.io/docs/consulting/investing#capitaliza...

- I'll add something about Initial Coin Offerings (which are now legal in at least Delaware).

AngelList ( https://angel.co for VC jobs and funding ) asks "What's the most useful business-related book you've ever read?" ... Getting Things Done (David Allen), 43Folders = 12 months + 31 days (Merlin Mann), The Art of the Start (Guy Kawasaki), The Personal MBA (Josh Kaufman)

Lever ( https://www.lever.co ) makes recruiting and hiring (some parts of HR) really easy.

LinkedIn ( https://www.linkedin.com ) also has a large selection of qualified talent: https://smallbusiness.linkedin.com/hiring

... How much can you tell about a candidate from what they decide to write on themselves on the internet?

USA Small Business Administration: "10 steps to start your business." https://www.sba.gov/starting-business/how-start-business/10-...

"Startup Incorporation Checklist: How to bootstrap a Delaware C-corp (or S-corp) with employee(s) in California" https://github.com/leonar15/startup-checklist

Jupyter Notebook (was: IPython Notebook) notebooks are diff'able and executable. Spreadsheets can be hard to review. https://github.com/jupyter/notebook

It's now installable with one conda command: ``conda install -y notebook pandas qgrid``

FounderKit has reviews for Products, Services, and Software for founders:

https://founderkit.com

[-]

CPU Utilization is Wrong

dmit | 2017-05-09 12:59:38 | 624 | # | ^

Instructions per cycle: https://en.wikipedia.org/wiki/Instructions_per_cycle

What does IPC tell me about where my code could/should be async so that it's not stalled waiting for IO? Is combined IO rate a useful metric for this?

There's an interesting "Cost per GFLOPs" table here: https://en.wikipedia.org/wiki/FLOPS

Btw these are great, thanks: http://www.brendangregg.com/linuxperf.html

( I still couldn't fill this out if I tried: http://www.brendangregg.com/blog/2014-08-23/linux-perf-tools... )
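The IPC metric itself is just a ratio of two counters, e.g. from `perf stat`. A trivial sketch (toy counter values; the ~1.0 threshold is the article's rough rule of thumb for stall-bound vs. instruction-bound):

```python
def ipc(instructions: int, cycles: int) -> float:
    # Instructions per cycle, from hardware counters (e.g. `perf stat`).
    return instructions / cycles

r = ipc(instructions=1_200_000_000, cycles=3_000_000_000)
print(r, "likely memory-stalled" if r < 1.0 else "likely instruction-bound")
```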

[+]

Oh, is this because of context switching for resource sharing?

[-]

Ask HN: Can I use convolutional neural networks to classify videos on a CPU

Is there any way that I can use conv nets to classify videos on a CPU? I do not have GPUs but I want to classify videos.

There's a table with runtime comparisons for a convnet here: https://github.com/ryanjay0/miles-deep/ (GPU CuDNN: 15s, GPU: 19s, CPU: 159s)

(Also written w/ Caffe: https://github.com/yahoo/open_nsfw)

[-]

Esoteric programming paradigms

Re: "Dependent Types"

In Python, PyContracts supports runtime type-checking and value constraints/assertions (as @contract decorators, annotations, and docstrings).

https://andreacensi.github.io/contracts/

Unfortunately, there's as yet no unifying syntax between PyContracts and the newer Python type annotations, which mypy checks statically (before runtime).

https://github.com/python/typeshed
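To illustrate the gap: annotations are checked statically (by mypy), while value constraints need a runtime check. Below is a homemade decorator sketch of the latter; note that this is NOT PyContracts' actual syntax, just an illustration of the idea:

```python
import functools
import inspect

def constrain(**checks):
    # Runtime value constraints per argument (illustrative only, not PyContracts).
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            bound = inspect.signature(fn).bind(*args, **kwargs)
            for name, pred in checks.items():
                if not pred(bound.arguments[name]):
                    raise ValueError(f"constraint failed for {name!r}")
            return fn(*args, **kwargs)
        return wrapper
    return deco

@constrain(n=lambda n: isinstance(n, int) and n >= 0)
def fact(n: int) -> int:  # the `int` annotation is what mypy checks statically
    return 1 if n < 2 else n * fact(n - 1)

print(fact(5))  # 120
```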

What does it mean for types to be "a first class member of" a programming language?

[-]

Reasons blog posts can be of higher scientific quality than journal articles

So, schema.org, has classes (C:) -- subclasses of CreativeWork and Article -- for property (P:) domains (D:) and ranges (R:) which cover this domain:

- CreativeWork: http://schema.org/CreativeWork

- - BlogPosting: http://schema.org/BlogPosting

- - Article: http://schema.org/Article

- - - NewsArticle: http://schema.org/NewsArticle

- - - Report: http://schema.org/Report

- - - ScholarlyArticle: http://schema.org/ScholarlyArticle

- - - SocialMediaPosting: http://schema.org/SocialMediaPosting

- - - TechArticle: http://schema.org/TechArticle

Thing: (name, [url], [identifier], [#about], [description[_gh_markdown_html]])

- C: CreativeWork:

- - P: comment R: Comment

- - C: Comment: https://schema.org/Comment
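In practice these classes show up as JSON-LD embedded in page markup; a minimal BlogPosting-with-Comment sketch (all values are placeholders):

```python
import json

doc = {
    "@context": "https://schema.org",
    "@type": "BlogPosting",
    "name": "Example post",              # placeholder values throughout
    "url": "https://example.org/post",
    "comment": {"@type": "Comment", "text": "Nice post!"},
}
# This JSON would typically be embedded in a <script type="application/ld+json"> tag.
print(json.dumps(doc, indent=2))
```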

[-]

Ask HN: Is anyone working on CRISPR for happiness?

"studies have found that genetic influences usually account for 35-50% of the variance in happiness measures"

No doubt there are many reasons why this is extremely complicated.

[-]

Roadmap to becoming a web developer in 2017

Nice.

- https://github.com/fkling/JSNetworkX would be a cool way to build interactive schema:Thing/CreativeWork curriculum graph visualizations (and BFS/DFS traversal)

- #WebSec: https://wrdrd.com/docs/consulting/web-development#websec

- Web Development Checklist: https://wrdrd.com/docs/consulting/web-development#web-develo...

-- http://webdevchecklist.com/

- | Web Frameworks (GitHub Sphinx wiki (./Makefile)): https://westurner.org/wiki/webframeworks (| Wikipedia, | Homepage, Source, Docs,)

[-]

Ask HN: How do you keep track/save your learnings?(so that you can revisit them)

- Vim Voom: `:Voom rest`, `:Voom markdown`

- Jupyter notebooks

- Sphinx docs: https://wrdrd.com/docs/consulting/research#research-tools src: https://github.com/wrdrd/docs/blob/master/docs/consulting/re...

- Sphinx wiki (./Makefile):

-- Src: https://github.com/westurner/wiki

-- Src: https://github.com/westurner/wiki/wiki

-- Web: https://westurner.org/wiki/workflow

[+]
[+]
[+]
[-]

Ask HN: Criticisms of Bayesian statistics?

In tech circles, it seems that Bayesian statistics is often favored over classical frequentist statistics. In my study of both Bayesian and frequentist statistics, it seems that the results of a Bayesian analysis are generally more intuitive, such as when comparing Bayesian credible intervals to frequentist confidence intervals. It also seems like Bayesian analysis avoids what I think is one of the most serious problems in analysis, the multiple comparisons problem. It's been easy for me to find any number of Bayesian critiques of frequentist stats, but I have rarely seen frequentist defenses against Bayesian stats. This may simply be because I mostly read technology related sites as opposed to more general statistics oriented sites. As such, I would really appreciate hearing some frequentist critiques of Bayesian stats. I feel like the situation can't be as cut and dry as one being better than the other in all things, so I would like to acquire a more balanced perspective by hearing about the other side. Thanks!

~bayesian logicism

https://plato.stanford.edu/entries/logic-inductive/ :

> It is now generally held that the core idea of Bayesian logicism is fatally flawed—that syntactic logical structure cannot be the sole determiner of the degree to which premises inductively support conclusions. [...]

[-]

80,000 Hours career plan worksheet

> What are your best medium-term options (3-15 years)?

> 1. What global problems do you think are most pressing?

The 17 UN Sustainable Development Goals (SDGs) and 169 targets w/ statistical indicators, AKA GlobalGoals, are for the whole world through 2030.

https://en.wikipedia.org/wiki/Sustainable_Development_Goals

http://www.globalgoals.org

"Schema.org: Mission, Project, Goal, Objective, Task" https://news.ycombinator.com/item?id=12525141 could make it easy to connect our local, regional, national, and global goals; and find people with similar objectives and solutions.

[-]

World's first smartphone with a molecular sensor is coming in 2017

> Looking at the back of the phone, you'd be forgiven for thinking the sensor is just the phone's camera. But that odd-looking dual lens is the scanner, basically the embedded version of the SCiO. It uses spectrometry to shine near-infrared light on objects — fruit, liquids, medicine, even your body — to analyze them.

> Say you're at the supermarket and you want to check how fresh the tomatoes are. Instead of squeezing them, you'd just launch the SCiO app, hold the scanner up to the skin of the tomato, and it will tell you how fresh it is on a visual scale. Do the same thing to your body and you can check your body mass index (BMI). You need to specify the thing you're scanning at the outset, and the actual analysis is performed in the cloud, but the whole process is a matter of seconds, not minutes.

https://en.wikipedia.org/wiki/Spectroscopy

... Tricorder X PRIZE: https://en.wikipedia.org/wiki/Tricorder_X_Prize

[-]

Ask HN: How would one build a business that only develops free software?

So I was reading Richard Stallman's blog on why you should not use google/uber/apple/twitter etc and I understand his reasoning. But what I don't understand is how would one go about building a startup or business that develops and distributes free software only and make good money doing so?

For example, would it be possible to build a free software version of uber/twitter/facebook etc? How would that work?

By removing all restrictions on the software, what is the incentive to not pirate the software? The GPL can be enforced, but that is clearly not practical especially outside the US.

[+]

> The source for Reddit [...]

Src: https://github.com/reddit/reddit/blob/master/r2/setup.py

Docs: https://github.com/reddit/reddit/wiki/Install-guide

"Reddit Enhancement Suite (RES)" is donationware: https://github.com/honestbleeps/Reddit-Enhancement-Suite

"List of Independent GNU social Instances" http://skilledtests.com/wiki/List_of_Independent_GNU_social_...

> [...] the first question you'd have to answer, is how to get people to switch to your service, whether it's free software or otherwise.

"Growth hacking": https://en.wikipedia.org/wiki/Growth_hacking

"Business models for open-source software" https://en.wikipedia.org/wiki/Business_models_for_open-sourc...

...

- https://github.com/google

- https://github.com/GoogleCloudPlatform

- https://github.com/kubernetes/kubernetes (Apache 2.0)

- https://github.com/uber

- https://github.com/apple (Swift is Apache 2.0)

- https://github.com/microsoft

- https://github.com/github

- https://github.com/twitter

- https://github.com/twitter/innovators-patent-agreement

- https://github.com/facebook

...

- "GNU Social" (GNU AGPL v3) https://en.wikipedia.org/wiki/GNU_social

... http://choosealicense.com/appendix/ has a table for comparison of open source software licenses.

http://tinyurl.com/p6mka3k describes Open Source Governance in a chart with two axes (Cathedral / Bazaar , Benevolent Dictator / Formal Meritocracy) ... as distinct from https://en.wikipedia.org/wiki/Open-source_governance , which is the application of open source software principles to government. USDS Playbook advises "Default to open" https://playbook.cio.gov/#play13

Anarchy / Budgeting: https://github.com/WhiteHouse/budgetdata

[-]

Ask HN: If your job involves continually importing CSVs, what industry is it?

I was wondering if people still use CSVs for data exchange now, or if we've mostly moved to JSON and XML.

Arguing for the CSVW (CSV on the Web) W3C Standards:

- "CSV on the Web: A Primer" http://w3c.github.io/csvw/primer/

- Src: https://github.com/w3c/csvw

- Columns have URIs (ideally from a shared RDFS/OWL vocabulary)

- Columns have XSD datatype URIs

- CSVW can be represented as RDF, JSON, JSONLD

With CSV, which extra metadata file describes how many rows at the top are for columnar metadata? (I.e. column labels, property URI, XSD datatype URI, units URI, precision, accuracy, significant figures) ... https://wrdrd.com/docs/consulting/linkedreproducibility#csv-...

... CSVW: https://wrdrd.com/docs/consulting/knowledge-engineering#csvw

  @prefix csvw: <http://www.w3.org/ns/csvw#> .
rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" 
target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow 
noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" 
rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" 
target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow 
noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">http://www.w3.org" rel="nofollow noopener" target="_blank">www.w3.org=""> .
</http:>
@context: http://www.w3.org/ns/csvw.jsonld

[-]

Ask HN: Maybe I kind of suck as a programmer – how do I supercharge my work?

I'm in my late twenties and I'm having a bit of a tough time dealing with my level of programming skill.

Over the past 3 years, I've released a few apps on iOS: not bad, nothing that would amaze anyone here. The code is generally messy and horrible, rife with race conditions and barely holding together in parts. (Biggest: 30k LOC.) While I'm proud of my work — especially design-wise — I feel most of my time was spent on battling stupid bugs. I haven't gained any specialist knowledge — just bloggable API experience. There's nothing I could write a book about.

Meanwhile, when I compulsively dig through one-man frameworks like YapDatabase, Audiobus, or AudioKit, I am left in awe! They're brimming with specialist knowledge. They're incredibly well-documented and organized. Major features were added over the course of weeks! People have written books about these frameworks, and they were created by my peers — probably alongside other work. Same with one-man apps like Editorial, Ulysses, or GoodNotes.

I am utterly baffled by how knowledgeable and productive these programmers are. If I'm dealing with a new topic, it can take weeks to get the lay of the land, figure out codebase interactions, consider all the edge cases, etc. etc. But the commits for these frameworks show that the devs basically worked through their problems over mere days — to say nothing of getting the overall architecture right from the start. An object cache layer for SQL? Automatic code gen via YAML? MIDI over Wi-Fi? Audio destuttering? Pff, it took me like a month to add copy/paste to my app!

I'm in need of some recalibration. Am I missing something? Is this quality of work the norm, or are these just exceptional programmers? And even if they are, how can I get closer to where they're standing? I don't want to wallow in my mediocrity, but the mountain looks almost insurmountable from here! No matter the financial cost or effort, I want to make amazing things that sustain me financially; but I can't do that if it takes me ten times as long to make a polished product as another dev. How do I get good enough to consistently do work worth writing books about?

For identifying strengths and weaknesses: "Programmer Competency Matrix":

- http://sijinjoseph.com/programmer-competency-matrix/

- https://competency-checklist.appspot.com/

- https://github.com/hltbra/programmer-competency-checklist

... from: https://wrdrd.com/docs/consulting/software-development#compu...

> How do I get good enough to consistently do work worth writing books about?

- These are great reads: "The Architecture of Open Source Applications" http://aosabook.org/en/

- TDD.

[+]
[-]

Ask HN: Anything Like Carl Sagan's Cosmos for Computer Science?

Is there anything like Carl Sagan's Cosmos that talks about the history of computing in an accessible way? Pondering Christmas gifts for my niece.

Computer #History: https://en.wikipedia.org/wiki/Computer

Outline of Computer Engineering #History of: https://en.wikipedia.org/wiki/Outline_of_computer_engineerin...

History of Computer Science: https://en.wikipedia.org/wiki/History_of_computer_science

Outline of Computer Science: https://en.wikipedia.org/wiki/Outline_of_computer_science

History of the Internet: https://en.wikipedia.org/wiki/History_of_the_Internet

History of the World Wide Web: https://en.wikipedia.org/wiki/History_of_the_World_Wide_Web

... maybe a bit OT; but, interestingly, IDK if any of these include a history section:

#K12CSFramework (Practices, Concepts): https://k12cs.org

- "Impacts of Computing" (Culture; Social Interactions; Safety, Law, and Ethics): https://k12cs.org/framework-statements-by-progression/#jump-...

"Competencies and Tasks on the Path to Human-Level AI" (Perception, Actuation, Memory, Learning, Reasoning, Planning, Attention, Motivation, Emotion, Modeling Self and Other, Social Interaction, Communication, Quantitative, Building/Creation): http://wiki.opencog.org/w/CogPrime_Overview#Competencies_and...

Code.org (#HourOfCode): https://code.org/learn

[+]

No; but, arguably, these are more comprehensive and informative than any one video. These links (to #OER) would be useful for anyone intending to replicate the form and style of the "Cosmos" video series with Computer Science content.

[+]
[-]

Learn X in Y minutes

[+]
[+]

The source is hosted on GitHub; there's a commit log (for each file and directory): https://github.com/adambard/learnxinyminutes-docs/commits/ma...

[-]

Org mode 9.0 released

[+]
[+]
[+]
[+]
[+]

Filenames may contain newlines. JSON strings may contain newlines.

The modular aspects of the UNIX philosophy are pretty cool; the data interchange format (un-typed \n-delimited strings) is irrational (and dangerous).

JSON with a JSONLD @context and XSD type URIs may also contain newlines (which should be escaped).

Note that, with OSX bash, tab \t must be specified as $'\t'.

And, sometimes, it's \r\n instead of just \n (which is extra-format metadata).

And then Unicode. Oh yeah, unicodë.
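A minimal Python sketch of the escaping point above: an embedded newline is indistinguishable from a record separator in a naive \n-delimited stream, while JSON escapes it losslessly.

```python
import json

# A string containing the delimiters that break naive line-based pipelines:
s = "line one\nline two\twith a tab"

# In a raw \n-delimited stream, the embedded newline looks like a record
# boundary; JSON escapes it instead:
encoded = json.dumps(s)
print(encoded)  # the \n and \t appear escaped inside one JSON string

# The round-trip is lossless:
assert json.loads(encoded) == s
```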

[+]

With Ctrl-V Tab, it's not possible to tell whether the whitespace is spaces or tabs (without cursoring over the \s|\t).

When you're parsing a text file, or streaming lines of text delimited with \n, how do downstream programs know whether it's ASCII or Unicode?


[-]

Ask HN: Best Git workflow for small teams

I have been building up a small team of programmers that are coming from CVS. I am looking for some ideas on ideal workflows.

What do you currently use for teams of 5-10 people?

[+]

+1 for HubFlow (GitFlow > HubFlow).

- https://westurner.org/tools/#hubflow

- Src: https://github.com/datasift/gitflow

- Docs: https://datasift.github.io/gitflow/

-- The git branch diagrams (originally from GitFlow) are extremely helpful: https://datasift.github.io/gitflow/IntroducingGitFlow.html

[-]

TDD Doesn't Work

[+]
[+]
[+]

> I guess if all code written could be seen as an API, TDD would be great, but that's not the world I live in.

If not an "Application Programming Interface", isn't all code an Interface? There's input and there's output.

With Object Oriented programming, the fact that there is an interface is more explicit (even if all you're doing is implementing objects that are already tested). There are function call argument (type) specifications (interfaces) whether the code is functional or OO.

[+]
[+]

> How do the people who write them know that they work?

Writing the test first verifies that a given test doesn't already pass (without any additional code).

Test after (but before committing) also seems to require a more thorough critical analysis.

And then someone finally fuzzes the code.

[+]
[+]
[+]
[+]

+1. TDD could be considered a derivative of the Scientific Method (hypothesis testing).

https://en.wikipedia.org/wiki/Scientific_method

https://en.wikipedia.org/wiki/Hypothesis

Writing the test first rules out a null hypothesis (that the test already passed); but not that it passes/fails because of some other chance variation (e.g. hash randomization and unordered maps).

https://en.wikipedia.org/wiki/Null_hypothesis

... https://en.wikipedia.org/wiki/Test-driven_development
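The red/green cycle described above can be sketched with a minimal, hypothetical example: the test states the hypothesis, and running it before the implementation exists confirms it doesn't already pass.

```python
import unittest

def add(a, b):
    # The implementation under test. Before this function existed,
    # running the test below would fail (the "red" step).
    return a + b

class TestAdd(unittest.TestCase):
    def test_add(self):
        # Hypothesis: add(2, 3) == 5. Test-first means observing this
        # fail before writing add(), then pass after (the "green" step).
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```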

[+]
[-]

Ask HN: How do you organise/integrate all the information in your life?

Hello fellow HNers,

How do you organise your life/work/side projects/todo lists/etc in an integrated way?

We have:

  * To do lists/Reminders
  * Bookmark lists
  * Kanban boards
  * Wikis
  * Financial tools
  * Calendars/Reminders
  * Files on disk
  * General notes
  * ...
However, there must be a better way to get an 'integrated' view of your life. To-do list managers suck at attaching relevant information; wikis can't do reminders; bookmarks can't keep track of notes and thoughts; etc. All of the above are typically not easily cross-linked, and exporting data for backup/later consumption is hit-and-miss across services.

So far, I've found a wiki to be almost the most flexible for keeping all manner of raw information kind of organised; but it lacks useful features like reminders, has minimal tagging support, no easy way to keep track of finances, etc.

I understand 'best tool for the job', but there's just so...many...

[+]
[+]

https://en.wikipedia.org/wiki/Org-mode#Integration lists a number of Vim extensions with OrgMode support.

... "What is the best way to avoid getting "Emacs Pinky"?" http://stackoverflow.com/questions/52492/what-is-the-best-wa...

[+]
[+]
[-]

Ask HN: What are the best web tools to build basic web apps as of October 2016?

Questions:

1: Which technologies are popular, and what do people like about them? (To help someone decide between them.) It seems like React, or perhaps Vue, is the popular frontend, and Node the popular backend?

2: Is there a site that keeps track of the various options for frontend and backend frameworks and how their popularity progresses?

[+]

* Backend performance: https://www.techempower.com/benchmarks/

* Frontend examples: https://github.com/tastejs/todomvc

There are tradeoffs between: performance, development speed, trainability (documentation), depth and breadth of developer community, long-term viability (foundation, backers), maintenance (upgrade path), API flexibility (decoupling, cohesion), standards compliance, vulnerability/risk (breadth), out of the box usability, accessibility, ...

So, for example, there are WYSIWYG tools which cover roughly the first 70-80% of the serverside and clientside requirements; and then there's a learning curve (how to do the rest of the app as abstractly as the framework developers would). (If said WYSIWYG tools aren't "round-trip" capable, once you've customized any of the actual code, you have to copy-paste (e.g. from a diff) in order to preserve your custom changes and keep using the GUI development tool.)

... Case in point: Django admin covers very many use cases (and is already testable, tested), but often we don't think to look at the source of the admin scaffolding app until we've written one-off models, views, and templates.

- Django Class-based Views abstract a lot of the work into already-tested components.

- Django REST Framework has OpenAPI (swagger) and a number of 3rd party authentication and authorization integrations available.

- In a frontend framework (MVVM), ARIA (Accessibility standards), the REST adapter and error handling are important (in addition to the aforementioned criteria (long-term viability, upgrade path)) ... and then we want to do realtime updates (with something like COMET, WebSockets, WebRTC)

Similar features in any framework are important for minimizing re-work. "Are there already tests for a majority of these components?"

[-]

Harvard and M.I.T. Are Sued Over Lack of Closed Captions

[+]
[+]
[+]
[+]

When a student is paying for an education in a federally-funded institution, it's reasonable to expect video-captioning, braille, text-scalable HTML (not PDF) wherever feasible. What about Sign Language interpretation? Simple English?

It would be great if everyone could afford to offer accessible content.

Maybe, instead of paying instructors, all lectures should be typed verbatim - in advance - and delivered by Text-to-Speech software (with gestural scripting and intonation). All in the same voice.

- Ahead-of-time lecture scripts could be used to help improve automated speech recognition accuracy.

- Provide additional support for paid captioning

-- Tools

-- Labor

- Provide support for crowdsourced captioning services

-- Feature: Upvote to prioritize

-- Feature: Flag as garbled

- Develop video-platform-agnostic transcription software (and make it available for free)

-- Desktop offline Speech-to-Text

-- Mobile offline Speech-to-Text

-- Speaker-specific language model training

- Require use of a video-platform with support for automated transcription

-- YouTube


- Companies with research in this space:

-- Speech Recognition, [Automated] Transcription, Autocomplete hinting for [Crowd-sourced] captioning

-- IBM

-- Google

--- YouTube has automated transcription

--- Google Voice supports transcription corrections, but AFAIU it's not speaker-specific

-- Baidu

-- Nuance (Dragon,)


... Textual lecture transcriptions are useful for everyone; because Ctrl-F to search.

- Label (with RDFa structured data) accessible content to make it easy to find

-- Schema.org accessibility structured data (for Places, Events)

--- https://github.com/schemaorg/schemaorg/issues/254

--- http://schema.org/accessibilityFeature

--- http://schema.org/accessibilityPhysicalFeature and/or

--- http://schema.org/amenityFeature

-- http://schema.org/Course

--- https://github.com/schemaorg/schemaorg/issues/254

- Challenges

-- Funding

-- CPU Time

-- Error Rate

-- Mitigating spam and vandalism

-- Human-verified crowdsourced corrections can/could be used to train recognizing and generative speaker-specific models

-- In the film A.I. (2001), there's a scene where they're asking questions of Dr. Know (voiced by Robin Williams) and the intonation/inflection inadvertently wastes one of their 3 wishes/requests. https://en.wikipedia.org/wiki/A.I._Artificial_Intelligence
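The structured-data labeling suggested above might look like the following JSON-LD, built here in Python. The lecture name is illustrative; `accessibilityFeature` and the "captions"/"transcript" values are real schema.org vocabulary.

```python
import json

# Hypothetical lecture video marked up with schema.org accessibility metadata,
# so captioned/transcribed content is easy to find:
lecture = {
    "@context": "https://schema.org",
    "@type": "VideoObject",
    "name": "Lecture 1: Introduction",
    "accessibilityFeature": ["captions", "transcript"],
    "inLanguage": "en",
}
print(json.dumps(lecture, indent=2))
```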

[+]

> [Evasive legalism (Obligations of accepting federal funding, definition of service, funding differentiation, policy harms)]

[Solutions for solving the problem (providing transcripts of lectures) most cost-efficiently]

> - Companies with research in this space:

- Microsoft

-- "conversational speech recognition"

- Apple

[-]

Jack Dorsey Is Losing Control of Twitter

[+]
[+]
[+]

- Are these journalists working for media companies that are competing for time?

- If they are shareholders, they don't seem to be declaring their conflicts of interest.

- For Twitter to respond would require that Twitter take editorial positions regarding the activities of competing media conglomerates. ("You're down; you should all just cash out now" [while you have far more daily active customers and revenue per user than a number of TV channels combined].)

- Are there competing international interests and biases? Is the market for noiseless citizen media saturated? How much time is there, really?

[+]
[+]

The conflict of interest is that one media company is publishing negative articles about another media company amidst acquisition talks.

What is their interest here? How do they intend to affect the perceived value of Twitter? Are there sources cited? Data? Figures?

[+]

Bloomberg is a private corporation which sells ads and exercises editorial discretion in publishing market-moving information and/or editorials.

Twitter is a public corporation which hosts Tweets, sells ads, and selects trending Twitter Moments.

Both companies are media services. Both companies compete for ad revenue. Both companies compete for readers' time.

(Medium allows journalists to publish information and/or editorials for free (or, now, for subscription revenue from membership programs)).

Schema.org: Mission, Project, Goal, Objective, Task

[+]

Some (search) use cases:

- I want to find an organization with a project with similar goals and objectives.

- I want to find other objectives linked to indicators.

- I want to say "the schema:Project described in this schema:WebPage is relevant to one or more #GlobalGoals" (so that people searching for ways to help can model similar goal-directed projects)

[+]
[+]

> In the case of physical stores and offices, I would imagine that a check for opening hours or telephone number shouldn't be counted as a pageview -> conversion anyway — they already chose you i.e. converted. Maybe they're a returning customer or somebody who liked the email marketing you sent them.

As well, adding structured data to the page (with RDFa, JSONLD, or Microdata) makes it much easier for voice assistant apps to parse out the data people ask for.

> Google [...]

Schema.org started as a collaborative effort between Bing, Google, Yahoo, and then Yandex. Anyone with a parser can read structured data from an HTML page with RDFa or a JSONLD document.

https://en.wikipedia.org/wiki/Schema.org
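As a sketch of how simple that parsing can be (the store name, phone number, and hours here are made up), pulling a JSON-LD island out of an HTML page takes a few lines of Python:

```python
import json
import re

html = """<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "LocalBusiness",
 "name": "Example Store", "telephone": "+1-555-0100",
 "openingHours": "Mo-Fr 09:00-17:00"}
</script></head><body>...</body></html>"""

# A real consumer would use an HTML parser; a regex suffices for illustration.
m = re.search(r'<script type="application/ld\+json">(.*?)</script>', html, re.S)
data = json.loads(m.group(1))
print(data["telephone"])      # +1-555-0100
print(data["openingHours"])   # Mo-Fr 09:00-17:00
```

This is the sort of lookup ("what are their hours?", "what's their number?") a voice assistant can answer directly from the page's structured data.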

[-]

The Open Source Data Science Masters

nns | 2016-08-19 08:12:25 | 95
[+]

>I still can't get over the term "data science", though. Not only is it ridiculously meaningless - what sort of science doesn't involve data, and how often would data be useful to something that isn't scientific at some level - its meaninglessness derives from the hyped buzzword trendiness that drove its upswing.

I couldn't disagree more.

There are a number of terms for domain-independent data analysis:

- data analysis

- statistics

- statistical modeling

- machine learning

- big data

- data journalism

- data science

I think it makes perfect sense that the practice of collecting and analyzing data be qualified and identified as a specific field.

I know of no better resource than these Venn diagrams, which identify the 'danger zones' around data science:

- http://datascienceassn.org/content/fourth-bubble-data-scienc...

Is there such a thing as a statistical model which only applies to a certain domain?

Domain knowledge ("substantive expertise"/"social sciences" in the linked Venn diagrams) serves only to logically validate statistical models that may be statistically valid but otherwise illogical in the context of currently available field knowledge (bias).

Regardless of field, the math is the same.

Regardless of field, the model either fits or it doesn't.

Regardless of field, the controls were either sufficient or they weren't.

We Should Not Accept Scientific Results That Have Not Been Repeated

So, we should have a structured way to represent that one study reproduces another? (e.g. that, with similar controls, the relation between the independent and dependent variables was sufficiently similar)

- RDF is the best way to do this. RDF can be represented as RDFa (RDF in HTML) and as JSON-LD (JSON LinkedData).

... " #LinkedReproducibility "

https://twitter.com/search?q=%23LinkedReproducibility

It isn't/wouldn't be sufficient to, with one triple, say (example.org/studyX, 'reproduces', example.org/studyY); there is a reified relation (an EdgeClass) containing metadata like who asserts that studyX reproduces studyY, when they assert that, and why (similar controls, similar outcome).
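
A sketch of what such a reified edge might look like as JSON-LD. The `ex:` vocabulary and its property names are invented for illustration (not a published schema); only the `prov:` terms are actual W3C PROV-O terms:

```python
import json

# Hypothetical reified "reproduces" assertion: instead of one bare triple,
# an EdgeClass node carries who asserted it, when, and why.
assertion = {
    "@context": {
        "ex": "http://example.org/vocab#",
        "prov": "http://www.w3.org/ns/prov#",
    },
    "@type": "ex:ReproductionAssertion",
    "ex:subjectStudy": {"@id": "http://example.org/studyX"},
    "ex:objectStudy": {"@id": "http://example.org/studyY"},
    "prov:wasAttributedTo": {"@id": "http://example.org/researcherZ"},
    "prov:generatedAtTime": "2016-08-19T00:00:00Z",
    "ex:justification": "similar controls, similar outcome",
}
print(json.dumps(assertion, indent=2))
```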

Today, we have to compare PDFs of studies and dig through them for links to the actual datasets from which the summary statistics were derived; so specifying who is asserting that studyX reproduces studyY is very relevant.

Ideally, it should be possible to publish a study with structured premises which lead to a conclusion (probably with formats like RDFa and JSON-LD, and a comprehensive schema for logical argumentation which does not yet exist). ("#StructuredPremises")

Most simply, we should be able to say "the study control type URIs match", "the tabular column URIs match", "the samples were representative", and "the identified relations were sufficiently within tolerances" to say that studyX reproduces studyY.

Doing so in prosaic, parenthetical two-column PDFs is wasteful and shortsighted.

An individual researcher then builds a set of beliefs about relations between factors in the world from a graph of studies ("#StudyGraph") with various quantitative and qualitative metadata attributes.

As fields, we would then expect our aggregate #StudyGraphs to indicate which relations between dependent and independent variables are relevant to prediction and actionable decision making (e.g. policy, research funding).

Ask HN: What do you think about the current education system?

Is it good? Bad? What can be done better? What problems do you identify? Is it upsetting? Are you used to it?

A Reboot of the Legendary Physics Site ArXiv Could Shape Open Science

Hypothesis (OpenAnnotation) comments (and highlights!) work for any URL.

https://hypothes.is/

https://hypothes.is/embed.js

http://www.openannotation.org/spec/core/

I've written up a few ideas about PDFs, edges, and reproducibility (in particular), with the hashtags #LinkedReproducibility (and #MetaResearch):

https://twitter.com/search?q=%23LinkedReproducibility

https://twitter.com/search?q=%23MetaResearch

- schema.org/MedicalTrialDesign enumerations could/should be extended to all of science (and then added to all of these PDFs, along with structured edge types like e.g. {intendedToReproduce, seemsToReproduce}, which then have specific ensuing discussions)

- http://health-lifesci.schema.org/MedicalTrialDesign

- there should be a way to evaluate controls in a structured, blinded, meta-analytic way

- PDF is pretty, but does not support RDFa (because this is a graph)

... notes here: https://wrdrd.com/docs/consulting/data-science#linked-reprod...

(edit) please feel free to implement any of these ideas (e.g. CC0)

> a way to evaluate controls

a way to evaluate premises (assumptions, controls, data, data transformations) and conclusions (presented as e.g. JSONLD, RDFa in a standard form (potentially like IPython/Jupyter .ipynb; but with an OrderedMap of I/O sequences with fixed #urifragment IDs)
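
A minimal sketch of that OrderedMap-of-cells idea; the fragment IDs, cell types, and cell contents are all invented for illustration:

```python
from collections import OrderedDict

# Each premise/step is keyed by a stable URI fragment, so "#in-1" / "#out-1"
# survive re-execution and can be cited from RDFa or JSON-LD elsewhere.
notebook = OrderedDict([
    ("#in-1",  {"type": "premise",    "source": "n = 30; control = 'placebo'"}),
    ("#out-1", {"type": "data",       "source": "load('trial.csv')"}),
    ("#in-2",  {"type": "transform",  "source": "normalize(df)"}),
    ("#out-2", {"type": "conclusion", "source": "report(effect_size)"}),
])

for fragment, cell in notebook.items():
    print(fragment, cell["type"])
```

Unlike .ipynb cell indices, fixed fragment IDs would stay citable even after cells are inserted or re-run.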

Principles of good data analysis

Helpful; thanks!

"Ten Simple Rules for Reproducible Computational Research" http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fj...

* Rule 1: For Every Result, Keep Track of How It Was Produced

* Rule 2: Avoid Manual Data Manipulation Steps

* Rule 3: Archive the Exact Versions of All External Programs Used

* Rule 4: Version Control All Custom Scripts

* Rule 5: Record All Intermediate Results, When Possible in Standardized Formats

* Rule 6: For Analyses That Include Randomness, Note Underlying Random Seeds

* Rule 7: Always Store Raw Data behind Plots

* Rule 8: Generate Hierarchical Analysis Output, Allowing Layers of Increasing Detail to Be Inspected

* Rule 9: Connect Textual Statements to Underlying Results

* Rule 10: Provide Public Access to Scripts, Runs, and Results

Sandve GK, Nekrutenko A, Taylor J, Hovig E (2013) Ten Simple Rules for Reproducible Computational Research. PLoS Comput Biol 9(10): e1003285. doi:10.1371/journal.pcbi.1003285
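
Rule 6 in a minimal Python sketch (the seed value is arbitrary): record the seed alongside the results it produced, so a randomized analysis can be rerun exactly.

```python
import json
import random

# Rule 6: note the underlying random seed; Rules 5 and 7: store it
# with the intermediate results it produced, in a standardized format.
seed = 20131024
random.seed(seed)
sample = [random.random() for _ in range(3)]

record = {"seed": seed, "n_draws": len(sample), "sample": sample}
print(json.dumps(record))

# Re-seeding reproduces the identical draws.
random.seed(seed)
assert sample == [random.random() for _ in range(3)]
```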

Why Puppet, Chef, Ansible aren't good enough

* Unique PREFIX; cool. Where do I get signed labels and checksums?

* I fail to see how baking configuration into packages can a) reduce complexity; b) obviate the need for configuration management.

* How do you diff filesystem images when there are unique PREFIXes in the paths?

A Salt module would be fun to play around with.

How likely am I to volunteer to troubleshoot your unique statically-linked packages?
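
On the diffing question, one workable approach is to normalize the unique prefixes away before comparing; a sketch, assuming an invented /store/&lt;hash&gt;-name path layout:

```python
import re

# Hypothetical store layout: /store/<32-hex-char-hash>-name/...
# The regex and paths below are invented for illustration.
PREFIX = re.compile(r"/store/[0-9a-f]{32}-")

def normalize(paths):
    """Replace unique hashed prefixes with a placeholder so two images diff cleanly."""
    return sorted(PREFIX.sub("/store/<hash>-", p) for p in paths)

image_a = ["/store/" + "a" * 32 + "-openssl-1.0.1/lib/libssl.so"]
image_b = ["/store/" + "b" * 32 + "-openssl-1.0.1/lib/libssl.so"]
print(normalize(image_a) == normalize(image_b))  # True: identical modulo the prefix
```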

Python vs Julia – an example from machine learning

1. Where is the source for this benchmark?

2. http://benchmarksgame.alioth.debian.org could be a bit more representative of broad-based algorithmic performance.

3. There are lots of Python libraries for application features other than handpicked algorithms. I would be interested to see benchmarks of the marshaling code in IJulia (IPython with Julia).

“Don’t Reinvent the Wheel, Use a Framework” They All Say

1. WordPress is an application with a plugin API. It is not a framework.

2. Writing a web application without a framework is a good learning experience. For anything but small-scale local learning experiences, the risks and costs of not working with a framework are significant. [It is probable that I, with my ego, would "do it wrong" and that a community of developers has arrived at a far superior solution.]

One of the best explanations I've found of what a framework offers over essentially writing your own is in the Symfony2 book: "Symfony2 versus Flat PHP". [1]

[1] http://symfony.com/doc/current/book/from_flat_php_to_symfony...

PEP 450: Adding A Statistics Module To The Standard Library

So let's amortize the cost of compiling and/or installing fast binaries by only relying on plain Python.

It would be great if there were a natural progression (and/or compat shims) for porting from this new stdlib library to NumPy[Py] (and/or from LibreOffice). (e.g. "Is it called 'cummean'?")
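
For reference, PEP 450 later shipped as the `statistics` module in Python 3.4; a quick sketch with rough NumPy counterparts noted for anyone porting:

```python
import statistics

data = [1, 2, 3, 4]

# stdlib (PEP 450) vs. rough NumPy counterparts:
#   statistics.mean(x)   ~ numpy.mean(x)
#   statistics.median(x) ~ numpy.median(x)
#   statistics.pstdev(x) ~ numpy.std(x)          (population, ddof=0)
#   statistics.stdev(x)  ~ numpy.std(x, ddof=1)  (sample)
print(statistics.mean(data))    # 2.5
print(statistics.pstdev(data))  # population standard deviation
```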

Functional Programming with Python

Is this a reveal.js rendering of an IPython notebook?

    ipython nbconvert --to slides <notebook.ipynb>

http://ipython.org/ipython-doc/dev/interactive/nbconvert.htm...

PEP 8 Modernisation

Funny, someone was just talking the other day about 79 characters per line with regard to using soft tabs for editor display consistency.

http://www.reddit.com/r/java/comments/1j7iv4/would_it_not_be...

These are useful for static code analysis and finding congruence with typesetting conventions:

https://pypi.python.org/pypi/flake8

https://pypi.python.org/pypi/condent

https://pypi.python.org/pypi/pep8ify

Useful Unix commands for data science

The "Text Processing" category of this list of unix utilities is also helpful: http://en.wikipedia.org/wiki/List_of_Unix_programs

BashReduce is a pretty cool application of many of these utilities.
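
For comparison, the classic `sort | uniq -c | sort -rn` word-count pipeline has a compact Python analogue:

```python
from collections import Counter

# Counter does the grouping and counting that `sort | uniq -c` does,
# and most_common() does the `sort -rn` step.
words = "the quick brown fox jumps over the lazy dog the".split()
for word, count in Counter(words).most_common(2):
    print(count, word)
```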

The data visualization community needs its own Hacker News

reddit.com/r/dataisbeautiful

/r/d3js , /r/visualization , /r/Infographics

schema.org/ > Thing > CreativeWork > { Article, Dataset, DataCatalog, MediaObject, and CollectionPage } may also be helpful.

Ask HN: Intermediate Python learning resources?

So I've completed Codecademy's course on Python, I have some experience fiddling with Flask and putting together random Python scripts. Generally, when I want to build something that I've never built before, I look up how to do it on Stackoverflow and manage to understand most of the things.

How can I take my knowledge to the next level?

Free learning resources are preferred. Hopefully ones you have used yourself when in my position.

Thanks!

The Green Tea Press books are great; and free.

Think Python: How To Think Like a Computer Scientist http://www.greenteapress.com/thinkpython/thinkpython.html

Think Complexity: Exploring Complexity Science with Python : http://www.greenteapress.com/compmod/

Think Stats: Probability and Statistics for Programmers : http://www.greenteapress.com/thinkstats/index.html

You can search announced, in progress, future, self-paced, and finished MOOCs (Massive Open Online Courses) with class-central.com : http://www.class-central.com/search?q=python

Ansible Simply Kicks Ass

Python-Based Tools for the Space Science Community

The Python installation tool utilized to install different versions of Anaconda and component packages is called [conda](http://docs.continuum.io/conda/intro.html). [pythonbrew]( https://github.com/utahta/pythonbrew) in combination with [virtualenvwrapper](http://virtualenvwrapper.readthedocs.org/en/latest/) is also great.

JSON API

In terms of http://en.wikipedia.org/wiki/Linked_data , there are a number of standard (overlapping) URI-based schemas for describing data with structured attributes:

* http://schema.org/docs/full.html

* http://schema.rdfs.org/all.json

* http://schema.rdfs.org/all.ttl (Turtle RDF Triples)

* http://rdfs.org/sioc/spec/

* http://json-ld.org/

* http://json-ld.org/spec/latest/json-ld/

* http://json-ld.org/spec/latest/json-ld-api/

* http://www.w3.org/TR/ldp/ Linked Data Platform TR defines a RESTful API standard

* http://wiki.apache.org/incubator/MarmottaProposal implements LDP 1.0 Draft and SPARQL 1.1
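
As a concrete example of the JSON-LD flavor, a minimal schema.org Dataset description (the names and URLs are placeholder values):

```python
import json

# A schema.org Dataset with one downloadable distribution, as JSON-LD.
doc = {
    "@context": "http://schema.org",
    "@type": "Dataset",
    "name": "Example measurements",
    "url": "http://example.org/datasets/1",
    "distribution": {
        "@type": "DataDownload",
        "contentUrl": "http://example.org/datasets/1.csv",
        "encodingFormat": "text/csv",
    },
}
print(json.dumps(doc, indent=2))
```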

Norton Ghost discontinued
